Test Report: KVM_Linux_crio 19282

32a626fe994c067a2713ce1ccf4f75414e4ff172:2024-07-17:35384

Failed tests (29/326)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 163.14
41 TestAddons/parallel/MetricsServer 339.64
54 TestAddons/StoppedEnableDisable 154.31
173 TestMultiControlPlane/serial/StopSecondaryNode 142.02
175 TestMultiControlPlane/serial/RestartSecondaryNode 61.13
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 370.07
180 TestMultiControlPlane/serial/StopCluster 141.92
240 TestMultiNode/serial/RestartKeepsNodes 332.21
242 TestMultiNode/serial/StopMultiNode 141.23
249 TestPreload 351.33
257 TestKubernetesUpgrade 440.7
329 TestStartStop/group/old-k8s-version/serial/FirstStart 280.69
355 TestStartStop/group/embed-certs/serial/Stop 138.95
357 TestStartStop/group/no-preload/serial/Stop 139.09
360 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.07
361 TestStartStop/group/old-k8s-version/serial/DeployApp 0.48
362 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 116.55
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
364 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
371 TestStartStop/group/old-k8s-version/serial/SecondStart 756.8
372 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.45
373 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.46
374 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.32
375 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.82
376 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 382.63
377 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 400.01
378 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 299.98
379 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 101.65
TestAddons/parallel/Ingress (163.14s)
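The failing step is the addons_test.go:264 check in the log below: the curl against the ingress, run inside the VM via minikube ssh, never gets a response and exits with status 28 (which matches curl's "operation timed out" code), so the test gives up after about 2m11s. The ingress-nginx controller and the test nginx pod both came up beforehand. A rough manual reproduction of that check, assuming the addons-453453 profile from this run is still available, uses the same commands the test runs:

    kubectl --context addons-453453 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
    kubectl --context addons-453453 replace --force -f testdata/nginx-ingress-v1.yaml
    kubectl --context addons-453453 replace --force -f testdata/nginx-pod-svc.yaml
    out/minikube-linux-amd64 -p addons-453453 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"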

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-453453 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-453453 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-453453 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6918b754-82dd-4b43-acdd-204f3a8419d3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6918b754-82dd-4b43-acdd-204f3a8419d3] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 20.004452109s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-453453 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-453453 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.156807859s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-453453 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-453453 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.136
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-453453 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-453453 addons disable ingress-dns --alsologtostderr -v=1: (1.182608422s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-453453 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-453453 addons disable ingress --alsologtostderr -v=1: (7.687969942s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-453453 -n addons-453453
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-453453 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-453453 logs -n 25: (1.289475981s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC | 17 Jul 24 18:04 UTC |
	| delete  | -p download-only-188993                                                                     | download-only-188993 | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC | 17 Jul 24 18:04 UTC |
	| delete  | -p download-only-013846                                                                     | download-only-013846 | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC | 17 Jul 24 18:04 UTC |
	| delete  | -p download-only-669228                                                                     | download-only-669228 | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC | 17 Jul 24 18:04 UTC |
	| delete  | -p download-only-188993                                                                     | download-only-188993 | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC | 17 Jul 24 18:04 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-742633 | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC |                     |
	|         | binary-mirror-742633                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38237                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-742633                                                                     | binary-mirror-742633 | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC | 17 Jul 24 18:04 UTC |
	| addons  | disable dashboard -p                                                                        | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC |                     |
	|         | addons-453453                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC |                     |
	|         | addons-453453                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-453453 --wait=true                                                                | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC | 17 Jul 24 18:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:07 UTC |
	|         | -p addons-453453                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:07 UTC |
	|         | -p addons-453453                                                                            |                      |         |         |                     |                     |
	| addons  | addons-453453 addons disable                                                                | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:07 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-453453 ip                                                                            | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:07 UTC |
	| addons  | disable inspektor-gadget -p                                                                 | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:07 UTC |
	|         | addons-453453                                                                               |                      |         |         |                     |                     |
	| addons  | addons-453453 addons disable                                                                | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:07 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:07 UTC |
	|         | addons-453453                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-453453 ssh cat                                                                       | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:07 UTC |
	|         | /opt/local-path-provisioner/pvc-78518099-7f58-4e6b-b950-2bfc9e8ecd09_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-453453 addons disable                                                                | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-453453 ssh curl -s                                                                   | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-453453 addons                                                                        | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:08 UTC | 17 Jul 24 18:08 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-453453 addons                                                                        | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:08 UTC | 17 Jul 24 18:08 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-453453 ip                                                                            | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:10 UTC | 17 Jul 24 18:10 UTC |
	| addons  | addons-453453 addons disable                                                                | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:10 UTC | 17 Jul 24 18:10 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-453453 addons disable                                                                | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:10 UTC | 17 Jul 24 18:10 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:04:20
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:04:20.238019  401374 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:04:20.238276  401374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:04:20.238286  401374 out.go:304] Setting ErrFile to fd 2...
	I0717 18:04:20.238290  401374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:04:20.238492  401374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:04:20.239079  401374 out.go:298] Setting JSON to false
	I0717 18:04:20.239977  401374 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6403,"bootTime":1721233057,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:04:20.240035  401374 start.go:139] virtualization: kvm guest
	I0717 18:04:20.242322  401374 out.go:177] * [addons-453453] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:04:20.243713  401374 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 18:04:20.243764  401374 notify.go:220] Checking for updates...
	I0717 18:04:20.246141  401374 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:04:20.247315  401374 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:04:20.248548  401374 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:04:20.249831  401374 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:04:20.250986  401374 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:04:20.252368  401374 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:04:20.284093  401374 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 18:04:20.285368  401374 start.go:297] selected driver: kvm2
	I0717 18:04:20.285386  401374 start.go:901] validating driver "kvm2" against <nil>
	I0717 18:04:20.285399  401374 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:04:20.286100  401374 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:04:20.286194  401374 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:04:20.301062  401374 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:04:20.301117  401374 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 18:04:20.301348  401374 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:04:20.301384  401374 cni.go:84] Creating CNI manager for ""
	I0717 18:04:20.301395  401374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:04:20.301412  401374 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 18:04:20.301489  401374 start.go:340] cluster config:
	{Name:addons-453453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-453453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:04:20.301621  401374 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:04:20.303284  401374 out.go:177] * Starting "addons-453453" primary control-plane node in "addons-453453" cluster
	I0717 18:04:20.304511  401374 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:04:20.304552  401374 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:04:20.304576  401374 cache.go:56] Caching tarball of preloaded images
	I0717 18:04:20.304653  401374 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:04:20.304672  401374 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:04:20.304989  401374 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/config.json ...
	I0717 18:04:20.305018  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/config.json: {Name:mkbb6ecf8797c490e907fa1b568b86907773cade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:20.305171  401374 start.go:360] acquireMachinesLock for addons-453453: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:04:20.305233  401374 start.go:364] duration metric: took 45.399µs to acquireMachinesLock for "addons-453453"
	I0717 18:04:20.305258  401374 start.go:93] Provisioning new machine with config: &{Name:addons-453453 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-453453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:04:20.305325  401374 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 18:04:20.306805  401374 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0717 18:04:20.306923  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:04:20.306961  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:04:20.321739  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33663
	I0717 18:04:20.322289  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:04:20.323043  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:04:20.323070  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:04:20.323409  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:04:20.323612  401374 main.go:141] libmachine: (addons-453453) Calling .GetMachineName
	I0717 18:04:20.323743  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:04:20.323883  401374 start.go:159] libmachine.API.Create for "addons-453453" (driver="kvm2")
	I0717 18:04:20.323916  401374 client.go:168] LocalClient.Create starting
	I0717 18:04:20.323956  401374 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem
	I0717 18:04:20.518397  401374 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem
	I0717 18:04:20.607035  401374 main.go:141] libmachine: Running pre-create checks...
	I0717 18:04:20.607059  401374 main.go:141] libmachine: (addons-453453) Calling .PreCreateCheck
	I0717 18:04:20.607635  401374 main.go:141] libmachine: (addons-453453) Calling .GetConfigRaw
	I0717 18:04:20.608126  401374 main.go:141] libmachine: Creating machine...
	I0717 18:04:20.608142  401374 main.go:141] libmachine: (addons-453453) Calling .Create
	I0717 18:04:20.608281  401374 main.go:141] libmachine: (addons-453453) Creating KVM machine...
	I0717 18:04:20.609537  401374 main.go:141] libmachine: (addons-453453) DBG | found existing default KVM network
	I0717 18:04:20.610369  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:20.610236  401396 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0717 18:04:20.610439  401374 main.go:141] libmachine: (addons-453453) DBG | created network xml: 
	I0717 18:04:20.610463  401374 main.go:141] libmachine: (addons-453453) DBG | <network>
	I0717 18:04:20.610474  401374 main.go:141] libmachine: (addons-453453) DBG |   <name>mk-addons-453453</name>
	I0717 18:04:20.610488  401374 main.go:141] libmachine: (addons-453453) DBG |   <dns enable='no'/>
	I0717 18:04:20.610499  401374 main.go:141] libmachine: (addons-453453) DBG |   
	I0717 18:04:20.610513  401374 main.go:141] libmachine: (addons-453453) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 18:04:20.610525  401374 main.go:141] libmachine: (addons-453453) DBG |     <dhcp>
	I0717 18:04:20.610534  401374 main.go:141] libmachine: (addons-453453) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 18:04:20.610544  401374 main.go:141] libmachine: (addons-453453) DBG |     </dhcp>
	I0717 18:04:20.610551  401374 main.go:141] libmachine: (addons-453453) DBG |   </ip>
	I0717 18:04:20.610560  401374 main.go:141] libmachine: (addons-453453) DBG |   
	I0717 18:04:20.610568  401374 main.go:141] libmachine: (addons-453453) DBG | </network>
	I0717 18:04:20.610596  401374 main.go:141] libmachine: (addons-453453) DBG | 
	I0717 18:04:20.615817  401374 main.go:141] libmachine: (addons-453453) DBG | trying to create private KVM network mk-addons-453453 192.168.39.0/24...
	I0717 18:04:20.679873  401374 main.go:141] libmachine: (addons-453453) DBG | private KVM network mk-addons-453453 192.168.39.0/24 created
	I0717 18:04:20.679908  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:20.679830  401396 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:04:20.679920  401374 main.go:141] libmachine: (addons-453453) Setting up store path in /home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453 ...
	I0717 18:04:20.679939  401374 main.go:141] libmachine: (addons-453453) Building disk image from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 18:04:20.679954  401374 main.go:141] libmachine: (addons-453453) Downloading /home/jenkins/minikube-integration/19282-392903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 18:04:20.960801  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:20.960616  401396 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa...
	I0717 18:04:21.027115  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:21.026956  401396 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/addons-453453.rawdisk...
	I0717 18:04:21.027151  401374 main.go:141] libmachine: (addons-453453) DBG | Writing magic tar header
	I0717 18:04:21.027162  401374 main.go:141] libmachine: (addons-453453) DBG | Writing SSH key tar header
	I0717 18:04:21.027171  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:21.027077  401396 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453 ...
	I0717 18:04:21.027184  401374 main.go:141] libmachine: (addons-453453) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453
	I0717 18:04:21.027219  401374 main.go:141] libmachine: (addons-453453) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453 (perms=drwx------)
	I0717 18:04:21.027245  401374 main.go:141] libmachine: (addons-453453) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines (perms=drwxr-xr-x)
	I0717 18:04:21.027258  401374 main.go:141] libmachine: (addons-453453) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines
	I0717 18:04:21.027286  401374 main.go:141] libmachine: (addons-453453) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube (perms=drwxr-xr-x)
	I0717 18:04:21.027317  401374 main.go:141] libmachine: (addons-453453) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903 (perms=drwxrwxr-x)
	I0717 18:04:21.027333  401374 main.go:141] libmachine: (addons-453453) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:04:21.027344  401374 main.go:141] libmachine: (addons-453453) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 18:04:21.027363  401374 main.go:141] libmachine: (addons-453453) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 18:04:21.027377  401374 main.go:141] libmachine: (addons-453453) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903
	I0717 18:04:21.027389  401374 main.go:141] libmachine: (addons-453453) Creating domain...
	I0717 18:04:21.027407  401374 main.go:141] libmachine: (addons-453453) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 18:04:21.027423  401374 main.go:141] libmachine: (addons-453453) DBG | Checking permissions on dir: /home/jenkins
	I0717 18:04:21.027438  401374 main.go:141] libmachine: (addons-453453) DBG | Checking permissions on dir: /home
	I0717 18:04:21.027449  401374 main.go:141] libmachine: (addons-453453) DBG | Skipping /home - not owner
	I0717 18:04:21.028343  401374 main.go:141] libmachine: (addons-453453) define libvirt domain using xml: 
	I0717 18:04:21.028363  401374 main.go:141] libmachine: (addons-453453) <domain type='kvm'>
	I0717 18:04:21.028372  401374 main.go:141] libmachine: (addons-453453)   <name>addons-453453</name>
	I0717 18:04:21.028380  401374 main.go:141] libmachine: (addons-453453)   <memory unit='MiB'>4000</memory>
	I0717 18:04:21.028411  401374 main.go:141] libmachine: (addons-453453)   <vcpu>2</vcpu>
	I0717 18:04:21.028423  401374 main.go:141] libmachine: (addons-453453)   <features>
	I0717 18:04:21.028431  401374 main.go:141] libmachine: (addons-453453)     <acpi/>
	I0717 18:04:21.028437  401374 main.go:141] libmachine: (addons-453453)     <apic/>
	I0717 18:04:21.028445  401374 main.go:141] libmachine: (addons-453453)     <pae/>
	I0717 18:04:21.028451  401374 main.go:141] libmachine: (addons-453453)     
	I0717 18:04:21.028456  401374 main.go:141] libmachine: (addons-453453)   </features>
	I0717 18:04:21.028465  401374 main.go:141] libmachine: (addons-453453)   <cpu mode='host-passthrough'>
	I0717 18:04:21.028469  401374 main.go:141] libmachine: (addons-453453)   
	I0717 18:04:21.028504  401374 main.go:141] libmachine: (addons-453453)   </cpu>
	I0717 18:04:21.028511  401374 main.go:141] libmachine: (addons-453453)   <os>
	I0717 18:04:21.028518  401374 main.go:141] libmachine: (addons-453453)     <type>hvm</type>
	I0717 18:04:21.028525  401374 main.go:141] libmachine: (addons-453453)     <boot dev='cdrom'/>
	I0717 18:04:21.028529  401374 main.go:141] libmachine: (addons-453453)     <boot dev='hd'/>
	I0717 18:04:21.028534  401374 main.go:141] libmachine: (addons-453453)     <bootmenu enable='no'/>
	I0717 18:04:21.028540  401374 main.go:141] libmachine: (addons-453453)   </os>
	I0717 18:04:21.028545  401374 main.go:141] libmachine: (addons-453453)   <devices>
	I0717 18:04:21.028552  401374 main.go:141] libmachine: (addons-453453)     <disk type='file' device='cdrom'>
	I0717 18:04:21.028563  401374 main.go:141] libmachine: (addons-453453)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/boot2docker.iso'/>
	I0717 18:04:21.028568  401374 main.go:141] libmachine: (addons-453453)       <target dev='hdc' bus='scsi'/>
	I0717 18:04:21.028574  401374 main.go:141] libmachine: (addons-453453)       <readonly/>
	I0717 18:04:21.028577  401374 main.go:141] libmachine: (addons-453453)     </disk>
	I0717 18:04:21.028605  401374 main.go:141] libmachine: (addons-453453)     <disk type='file' device='disk'>
	I0717 18:04:21.028622  401374 main.go:141] libmachine: (addons-453453)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 18:04:21.028631  401374 main.go:141] libmachine: (addons-453453)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/addons-453453.rawdisk'/>
	I0717 18:04:21.028640  401374 main.go:141] libmachine: (addons-453453)       <target dev='hda' bus='virtio'/>
	I0717 18:04:21.028645  401374 main.go:141] libmachine: (addons-453453)     </disk>
	I0717 18:04:21.028654  401374 main.go:141] libmachine: (addons-453453)     <interface type='network'>
	I0717 18:04:21.028660  401374 main.go:141] libmachine: (addons-453453)       <source network='mk-addons-453453'/>
	I0717 18:04:21.028672  401374 main.go:141] libmachine: (addons-453453)       <model type='virtio'/>
	I0717 18:04:21.028677  401374 main.go:141] libmachine: (addons-453453)     </interface>
	I0717 18:04:21.028685  401374 main.go:141] libmachine: (addons-453453)     <interface type='network'>
	I0717 18:04:21.028690  401374 main.go:141] libmachine: (addons-453453)       <source network='default'/>
	I0717 18:04:21.028695  401374 main.go:141] libmachine: (addons-453453)       <model type='virtio'/>
	I0717 18:04:21.028703  401374 main.go:141] libmachine: (addons-453453)     </interface>
	I0717 18:04:21.028720  401374 main.go:141] libmachine: (addons-453453)     <serial type='pty'>
	I0717 18:04:21.028730  401374 main.go:141] libmachine: (addons-453453)       <target port='0'/>
	I0717 18:04:21.028733  401374 main.go:141] libmachine: (addons-453453)     </serial>
	I0717 18:04:21.028739  401374 main.go:141] libmachine: (addons-453453)     <console type='pty'>
	I0717 18:04:21.028744  401374 main.go:141] libmachine: (addons-453453)       <target type='serial' port='0'/>
	I0717 18:04:21.028751  401374 main.go:141] libmachine: (addons-453453)     </console>
	I0717 18:04:21.028756  401374 main.go:141] libmachine: (addons-453453)     <rng model='virtio'>
	I0717 18:04:21.028764  401374 main.go:141] libmachine: (addons-453453)       <backend model='random'>/dev/random</backend>
	I0717 18:04:21.028770  401374 main.go:141] libmachine: (addons-453453)     </rng>
	I0717 18:04:21.028777  401374 main.go:141] libmachine: (addons-453453)     
	I0717 18:04:21.028781  401374 main.go:141] libmachine: (addons-453453)     
	I0717 18:04:21.028786  401374 main.go:141] libmachine: (addons-453453)   </devices>
	I0717 18:04:21.028793  401374 main.go:141] libmachine: (addons-453453) </domain>
	I0717 18:04:21.028800  401374 main.go:141] libmachine: (addons-453453) 
	I0717 18:04:21.034791  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:cb:e5:7a in network default
	I0717 18:04:21.035323  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:21.035338  401374 main.go:141] libmachine: (addons-453453) Ensuring networks are active...
	I0717 18:04:21.036117  401374 main.go:141] libmachine: (addons-453453) Ensuring network default is active
	I0717 18:04:21.036418  401374 main.go:141] libmachine: (addons-453453) Ensuring network mk-addons-453453 is active
	I0717 18:04:21.037148  401374 main.go:141] libmachine: (addons-453453) Getting domain xml...
	I0717 18:04:21.038033  401374 main.go:141] libmachine: (addons-453453) Creating domain...
	I0717 18:04:22.427339  401374 main.go:141] libmachine: (addons-453453) Waiting to get IP...
	I0717 18:04:22.428129  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:22.428575  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:22.428623  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:22.428482  401396 retry.go:31] will retry after 275.951356ms: waiting for machine to come up
	I0717 18:04:22.705991  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:22.706535  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:22.706558  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:22.706485  401396 retry.go:31] will retry after 356.482479ms: waiting for machine to come up
	I0717 18:04:23.065082  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:23.065542  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:23.065569  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:23.065486  401396 retry.go:31] will retry after 375.44866ms: waiting for machine to come up
	I0717 18:04:23.442207  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:23.442672  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:23.442704  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:23.442620  401396 retry.go:31] will retry after 574.721034ms: waiting for machine to come up
	I0717 18:04:24.019349  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:24.019714  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:24.019746  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:24.019670  401396 retry.go:31] will retry after 600.599028ms: waiting for machine to come up
	I0717 18:04:24.621492  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:24.621953  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:24.621979  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:24.621895  401396 retry.go:31] will retry after 626.183649ms: waiting for machine to come up
	I0717 18:04:25.249582  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:25.250011  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:25.250033  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:25.249973  401396 retry.go:31] will retry after 834.131686ms: waiting for machine to come up
	I0717 18:04:26.085481  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:26.085850  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:26.085875  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:26.085792  401396 retry.go:31] will retry after 1.480433748s: waiting for machine to come up
	I0717 18:04:27.568563  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:27.568882  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:27.568911  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:27.568828  401396 retry.go:31] will retry after 1.138683509s: waiting for machine to come up
	I0717 18:04:28.709179  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:28.709602  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:28.709633  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:28.709563  401396 retry.go:31] will retry after 1.557250255s: waiting for machine to come up
	I0717 18:04:30.269361  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:30.269795  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:30.269817  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:30.269747  401396 retry.go:31] will retry after 2.866762957s: waiting for machine to come up
	I0717 18:04:33.140224  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:33.140607  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:33.140672  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:33.140536  401396 retry.go:31] will retry after 3.093750833s: waiting for machine to come up
	I0717 18:04:36.236265  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:36.236662  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:36.236685  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:36.236609  401396 retry.go:31] will retry after 4.356080984s: waiting for machine to come up
	I0717 18:04:40.593935  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.594279  401374 main.go:141] libmachine: (addons-453453) Found IP for machine: 192.168.39.136
	I0717 18:04:40.594304  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has current primary IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.594310  401374 main.go:141] libmachine: (addons-453453) Reserving static IP address...
	I0717 18:04:40.594935  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find host DHCP lease matching {name: "addons-453453", mac: "52:54:00:43:b0:91", ip: "192.168.39.136"} in network mk-addons-453453
	I0717 18:04:40.665266  401374 main.go:141] libmachine: (addons-453453) Reserved static IP address: 192.168.39.136
	I0717 18:04:40.665297  401374 main.go:141] libmachine: (addons-453453) DBG | Getting to WaitForSSH function...
	I0717 18:04:40.665305  401374 main.go:141] libmachine: (addons-453453) Waiting for SSH to be available...
	I0717 18:04:40.667541  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.667883  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:minikube Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:40.667914  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.668098  401374 main.go:141] libmachine: (addons-453453) DBG | Using SSH client type: external
	I0717 18:04:40.668118  401374 main.go:141] libmachine: (addons-453453) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa (-rw-------)
	I0717 18:04:40.668163  401374 main.go:141] libmachine: (addons-453453) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:04:40.668180  401374 main.go:141] libmachine: (addons-453453) DBG | About to run SSH command:
	I0717 18:04:40.668202  401374 main.go:141] libmachine: (addons-453453) DBG | exit 0
	I0717 18:04:40.788260  401374 main.go:141] libmachine: (addons-453453) DBG | SSH cmd err, output: <nil>: 
	I0717 18:04:40.788555  401374 main.go:141] libmachine: (addons-453453) KVM machine creation complete!
	I0717 18:04:40.788889  401374 main.go:141] libmachine: (addons-453453) Calling .GetConfigRaw
	I0717 18:04:40.789484  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:04:40.789647  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:04:40.789840  401374 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 18:04:40.789857  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:04:40.791095  401374 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 18:04:40.791113  401374 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 18:04:40.791130  401374 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 18:04:40.791140  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:40.793438  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.793778  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:40.793798  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.793946  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:40.794115  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:40.794300  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:40.794423  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:40.794615  401374 main.go:141] libmachine: Using SSH client type: native
	I0717 18:04:40.794817  401374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0717 18:04:40.794827  401374 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 18:04:40.891999  401374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:04:40.892031  401374 main.go:141] libmachine: Detecting the provisioner...
	I0717 18:04:40.892041  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:40.894706  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.895110  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:40.895138  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.895296  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:40.895505  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:40.895667  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:40.895778  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:40.896005  401374 main.go:141] libmachine: Using SSH client type: native
	I0717 18:04:40.896215  401374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0717 18:04:40.896231  401374 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 18:04:40.992946  401374 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 18:04:40.993009  401374 main.go:141] libmachine: found compatible host: buildroot
	I0717 18:04:40.993016  401374 main.go:141] libmachine: Provisioning with buildroot...
	I0717 18:04:40.993023  401374 main.go:141] libmachine: (addons-453453) Calling .GetMachineName
	I0717 18:04:40.993257  401374 buildroot.go:166] provisioning hostname "addons-453453"
	I0717 18:04:40.993281  401374 main.go:141] libmachine: (addons-453453) Calling .GetMachineName
	I0717 18:04:40.993476  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:40.996076  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.996367  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:40.996396  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.996600  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:40.996830  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:40.996986  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:40.997128  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:40.997261  401374 main.go:141] libmachine: Using SSH client type: native
	I0717 18:04:40.997490  401374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0717 18:04:40.997509  401374 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-453453 && echo "addons-453453" | sudo tee /etc/hostname
	I0717 18:04:41.112003  401374 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-453453
	
	I0717 18:04:41.112033  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:41.114800  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.115132  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.115165  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.115298  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:41.115445  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.115646  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.115775  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:41.115942  401374 main.go:141] libmachine: Using SSH client type: native
	I0717 18:04:41.116153  401374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0717 18:04:41.116172  401374 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-453453' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-453453/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-453453' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:04:41.226808  401374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:04:41.226841  401374 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 18:04:41.226865  401374 buildroot.go:174] setting up certificates
	I0717 18:04:41.226875  401374 provision.go:84] configureAuth start
	I0717 18:04:41.226883  401374 main.go:141] libmachine: (addons-453453) Calling .GetMachineName
	I0717 18:04:41.227229  401374 main.go:141] libmachine: (addons-453453) Calling .GetIP
	I0717 18:04:41.229991  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.230300  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.230329  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.230528  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:41.233153  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.233480  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.233506  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.233662  401374 provision.go:143] copyHostCerts
	I0717 18:04:41.233818  401374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 18:04:41.234005  401374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 18:04:41.234089  401374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 18:04:41.234153  401374 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.addons-453453 san=[127.0.0.1 192.168.39.136 addons-453453 localhost minikube]
	I0717 18:04:41.350196  401374 provision.go:177] copyRemoteCerts
	I0717 18:04:41.350265  401374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:04:41.350306  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:41.352877  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.353209  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.353239  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.353380  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:41.353612  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.353791  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:41.353971  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:04:41.436436  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 18:04:41.460054  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 18:04:41.482565  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:04:41.504940  401374 provision.go:87] duration metric: took 278.052103ms to configureAuth
	I0717 18:04:41.504968  401374 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:04:41.505132  401374 config.go:182] Loaded profile config "addons-453453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:04:41.505207  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:41.508114  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.508502  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.508537  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.508672  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:41.508898  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.509120  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.509269  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:41.509470  401374 main.go:141] libmachine: Using SSH client type: native
	I0717 18:04:41.509656  401374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0717 18:04:41.509677  401374 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:04:41.761132  401374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:04:41.761159  401374 main.go:141] libmachine: Checking connection to Docker...
	I0717 18:04:41.761169  401374 main.go:141] libmachine: (addons-453453) Calling .GetURL
	I0717 18:04:41.762716  401374 main.go:141] libmachine: (addons-453453) DBG | Using libvirt version 6000000
	I0717 18:04:41.766112  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.766499  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.766540  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.766654  401374 main.go:141] libmachine: Docker is up and running!
	I0717 18:04:41.766672  401374 main.go:141] libmachine: Reticulating splines...
	I0717 18:04:41.766681  401374 client.go:171] duration metric: took 21.442756087s to LocalClient.Create
	I0717 18:04:41.766707  401374 start.go:167] duration metric: took 21.442826772s to libmachine.API.Create "addons-453453"
	I0717 18:04:41.766719  401374 start.go:293] postStartSetup for "addons-453453" (driver="kvm2")
	I0717 18:04:41.766729  401374 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:04:41.766748  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:04:41.766991  401374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:04:41.767017  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:41.768962  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.769187  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.769211  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.769338  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:41.769523  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.769690  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:41.769840  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:04:41.851033  401374 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:04:41.855354  401374 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:04:41.855382  401374 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 18:04:41.855455  401374 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 18:04:41.855482  401374 start.go:296] duration metric: took 88.757268ms for postStartSetup
	I0717 18:04:41.855526  401374 main.go:141] libmachine: (addons-453453) Calling .GetConfigRaw
	I0717 18:04:41.856089  401374 main.go:141] libmachine: (addons-453453) Calling .GetIP
	I0717 18:04:41.858768  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.859084  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.859119  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.859355  401374 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/config.json ...
	I0717 18:04:41.859531  401374 start.go:128] duration metric: took 21.55418822s to createHost
	I0717 18:04:41.859553  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:41.861892  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.862185  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.862205  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.862330  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:41.862491  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.862658  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.862760  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:41.862934  401374 main.go:141] libmachine: Using SSH client type: native
	I0717 18:04:41.863115  401374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0717 18:04:41.863129  401374 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 18:04:41.960997  401374 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721239481.935458155
	
	I0717 18:04:41.961024  401374 fix.go:216] guest clock: 1721239481.935458155
	I0717 18:04:41.961040  401374 fix.go:229] Guest: 2024-07-17 18:04:41.935458155 +0000 UTC Remote: 2024-07-17 18:04:41.859542036 +0000 UTC m=+21.655321364 (delta=75.916119ms)
	I0717 18:04:41.961097  401374 fix.go:200] guest clock delta is within tolerance: 75.916119ms
	I0717 18:04:41.961108  401374 start.go:83] releasing machines lock for "addons-453453", held for 21.655862836s
	I0717 18:04:41.961140  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:04:41.961444  401374 main.go:141] libmachine: (addons-453453) Calling .GetIP
	I0717 18:04:41.964264  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.964676  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.964701  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.964855  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:04:41.965399  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:04:41.965598  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:04:41.965713  401374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:04:41.965780  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:41.965813  401374 ssh_runner.go:195] Run: cat /version.json
	I0717 18:04:41.965837  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:41.968520  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.968918  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.968944  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.969009  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.969098  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:41.969261  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.969433  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:41.969456  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.969476  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.969641  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:41.969747  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:04:41.969770  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.969934  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:41.970098  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:04:42.067229  401374 ssh_runner.go:195] Run: systemctl --version
	I0717 18:04:42.073039  401374 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:04:42.233292  401374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:04:42.239429  401374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:04:42.239495  401374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:04:42.255796  401374 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:04:42.255824  401374 start.go:495] detecting cgroup driver to use...
	I0717 18:04:42.255910  401374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:04:42.271553  401374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:04:42.284805  401374 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:04:42.284870  401374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:04:42.298507  401374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:04:42.311587  401374 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:04:42.420275  401374 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:04:42.559226  401374 docker.go:233] disabling docker service ...
	I0717 18:04:42.559312  401374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:04:42.573381  401374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:04:42.585885  401374 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:04:42.711110  401374 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:04:42.838705  401374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:04:42.852306  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:04:42.869920  401374 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:04:42.869978  401374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:04:42.880071  401374 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:04:42.880130  401374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:04:42.890387  401374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:04:42.900425  401374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:04:42.910537  401374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:04:42.920866  401374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:04:42.930972  401374 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:04:42.947623  401374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:04:42.957841  401374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:04:42.966817  401374 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:04:42.966870  401374 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:04:42.979516  401374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:04:42.988650  401374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:04:43.104191  401374 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:04:43.235666  401374 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:04:43.235770  401374 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:04:43.240382  401374 start.go:563] Will wait 60s for crictl version
	I0717 18:04:43.240459  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:04:43.244006  401374 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:04:43.282118  401374 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:04:43.282222  401374 ssh_runner.go:195] Run: crio --version
	I0717 18:04:43.310168  401374 ssh_runner.go:195] Run: crio --version
	I0717 18:04:43.339548  401374 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:04:43.340692  401374 main.go:141] libmachine: (addons-453453) Calling .GetIP
	I0717 18:04:43.343416  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:43.343720  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:43.343749  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:43.344014  401374 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:04:43.348093  401374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:04:43.361061  401374 kubeadm.go:883] updating cluster {Name:addons-453453 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:addons-453453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:04:43.361179  401374 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:04:43.361237  401374 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:04:43.395918  401374 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:04:43.395998  401374 ssh_runner.go:195] Run: which lz4
	I0717 18:04:43.399795  401374 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:04:43.403952  401374 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:04:43.403985  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:04:44.678210  401374 crio.go:462] duration metric: took 1.278436492s to copy over tarball
	I0717 18:04:44.678304  401374 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:04:46.832027  401374 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153675523s)
	I0717 18:04:46.832079  401374 crio.go:469] duration metric: took 2.153831936s to extract the tarball
	I0717 18:04:46.832091  401374 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:04:46.869061  401374 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:04:46.908534  401374 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:04:46.908561  401374 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:04:46.908572  401374 kubeadm.go:934] updating node { 192.168.39.136 8443 v1.30.2 crio true true} ...
	I0717 18:04:46.908722  401374 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-453453 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-453453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:04:46.908811  401374 ssh_runner.go:195] Run: crio config
	I0717 18:04:46.953055  401374 cni.go:84] Creating CNI manager for ""
	I0717 18:04:46.953085  401374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:04:46.953101  401374 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:04:46.953122  401374 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.136 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-453453 NodeName:addons-453453 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:04:46.953267  401374 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-453453"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:04:46.953326  401374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:04:46.963316  401374 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:04:46.963378  401374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:04:46.972564  401374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 18:04:46.988106  401374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:04:47.003249  401374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0717 18:04:47.018480  401374 ssh_runner.go:195] Run: grep 192.168.39.136	control-plane.minikube.internal$ /etc/hosts
	I0717 18:04:47.022215  401374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:04:47.033901  401374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:04:47.169592  401374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:04:47.185971  401374 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453 for IP: 192.168.39.136
	I0717 18:04:47.185998  401374 certs.go:194] generating shared ca certs ...
	I0717 18:04:47.186021  401374 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.186177  401374 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 18:04:47.344873  401374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt ...
	I0717 18:04:47.344905  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt: {Name:mka017c54a2048ec5188c8b3a316b09643283b3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.345101  401374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key ...
	I0717 18:04:47.345117  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key: {Name:mkfada9ce6d628899b584576941d3e5f9fe82031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.345224  401374 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 18:04:47.480337  401374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt ...
	I0717 18:04:47.480372  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt: {Name:mk4e23d96745a6551e62956a93a29bb8c111fa53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.480587  401374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key ...
	I0717 18:04:47.480604  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key: {Name:mkc1f95c0ca70a76682e287edc9dc8ffe7c48cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.480705  401374 certs.go:256] generating profile certs ...
	I0717 18:04:47.480782  401374 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.key
	I0717 18:04:47.480803  401374 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt with IP's: []
	I0717 18:04:47.622758  401374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt ...
	I0717 18:04:47.622794  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: {Name:mk775f59f966ea9acd9c047f3474be2d435176c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.623008  401374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.key ...
	I0717 18:04:47.623024  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.key: {Name:mk1fc9bd34eb9588b680b756e27c4a6f01cc67a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.623134  401374 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.key.33496d48
	I0717 18:04:47.623157  401374 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.crt.33496d48 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.136]
	I0717 18:04:47.805937  401374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.crt.33496d48 ...
	I0717 18:04:47.805971  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.crt.33496d48: {Name:mk2c8db22765fc408949dc2494cce2ef703a745e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.806136  401374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.key.33496d48 ...
	I0717 18:04:47.806151  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.key.33496d48: {Name:mkd6774392db8e9d01d1cb342f4b7173250d8c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.806222  401374 certs.go:381] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.crt.33496d48 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.crt
	I0717 18:04:47.806293  401374 certs.go:385] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.key.33496d48 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.key
	I0717 18:04:47.806337  401374 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/proxy-client.key
	I0717 18:04:47.806354  401374 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/proxy-client.crt with IP's: []
	I0717 18:04:47.939103  401374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/proxy-client.crt ...
	I0717 18:04:47.939133  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/proxy-client.crt: {Name:mk7134d15e9441f1be34b4a25ffa1d9fac41bae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.939294  401374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/proxy-client.key ...
	I0717 18:04:47.939305  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/proxy-client.key: {Name:mk549472ec4ec266d1c194199dbcf4f049f11375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.939470  401374 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 18:04:47.939506  401374 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 18:04:47.939531  401374 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:04:47.939554  401374 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 18:04:47.940131  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:04:47.966665  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:04:47.988464  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:04:48.010561  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 18:04:48.037177  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 18:04:48.060465  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:04:48.082666  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:04:48.104419  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:04:48.127446  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:04:48.154038  401374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:04:48.170925  401374 ssh_runner.go:195] Run: openssl version
	I0717 18:04:48.176771  401374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:04:48.187191  401374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:04:48.191462  401374 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:04:48.191510  401374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:04:48.197340  401374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:04:48.207474  401374 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:04:48.211375  401374 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 18:04:48.211431  401374 kubeadm.go:392] StartCluster: {Name:addons-453453 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:addons-453453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:04:48.211515  401374 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:04:48.211566  401374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:04:48.252746  401374 cri.go:89] found id: ""
	I0717 18:04:48.252826  401374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:04:48.263131  401374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:04:48.273268  401374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:04:48.282951  401374 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:04:48.282972  401374 kubeadm.go:157] found existing configuration files:
	
	I0717 18:04:48.283016  401374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:04:48.292171  401374 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:04:48.292226  401374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:04:48.302501  401374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:04:48.314007  401374 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:04:48.314071  401374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:04:48.325193  401374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:04:48.334131  401374 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:04:48.334187  401374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:04:48.343943  401374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:04:48.352841  401374 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:04:48.352890  401374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
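Before re-running kubeadm init, minikube checks each kubeadm-managed kubeconfig under /etc/kubernetes and removes any file that does not reference the expected control-plane endpoint. On this freshly provisioned node every grep exits with status 2 because the files do not exist, so the rm -f calls above are effectively no-ops. A minimal sketch of that check-and-remove pattern (not minikube's actual code; the helper structure and error handling are illustrative):

```go
// Illustrative sketch only: keep a kubeconfig only if it already points at the
// expected control-plane endpoint; otherwise remove it so `kubeadm init` recreates it.
package main

import (
	"fmt"
	"os/exec"
)

func cleanStaleKubeconfigs(endpoint string) error {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing or the file does not exist.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			// Treat the file as stale (or absent); rm -f is a no-op if it is already gone.
			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
				return fmt.Errorf("removing %s: %w", f, err)
			}
		}
	}
	return nil
}

func main() {
	if err := cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443"); err != nil {
		fmt.Println(err)
	}
}
```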
	I0717 18:04:48.362806  401374 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:04:48.421393  401374 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:04:48.421455  401374 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:04:48.548175  401374 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:04:48.548268  401374 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:04:48.548380  401374 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:04:48.751932  401374 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:04:48.857318  401374 out.go:204]   - Generating certificates and keys ...
	I0717 18:04:48.857450  401374 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:04:48.857527  401374 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:04:48.944043  401374 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 18:04:49.070400  401374 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 18:04:49.122961  401374 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 18:04:49.289889  401374 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 18:04:49.475463  401374 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 18:04:49.475753  401374 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-453453 localhost] and IPs [192.168.39.136 127.0.0.1 ::1]
	I0717 18:04:49.658320  401374 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 18:04:49.658519  401374 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-453453 localhost] and IPs [192.168.39.136 127.0.0.1 ::1]
	I0717 18:04:49.945905  401374 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 18:04:50.017826  401374 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 18:04:50.077618  401374 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 18:04:50.077883  401374 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:04:50.262277  401374 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:04:50.351077  401374 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:04:50.539512  401374 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:04:50.701755  401374 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:04:50.913907  401374 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:04:50.915004  401374 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:04:50.918636  401374 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:04:50.920399  401374 out.go:204]   - Booting up control plane ...
	I0717 18:04:50.920526  401374 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:04:50.920620  401374 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:04:50.921030  401374 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:04:50.936603  401374 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:04:50.937808  401374 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:04:50.937855  401374 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:04:51.058524  401374 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:04:51.058658  401374 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:04:51.559796  401374 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.750612ms
	I0717 18:04:51.559918  401374 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:04:57.057996  401374 kubeadm.go:310] [api-check] The API server is healthy after 5.502256504s
	I0717 18:04:57.070807  401374 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:04:57.085092  401374 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:04:57.114357  401374 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:04:57.114536  401374 kubeadm.go:310] [mark-control-plane] Marking the node addons-453453 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:04:57.126681  401374 kubeadm.go:310] [bootstrap-token] Using token: abmxn2.f1edq7xeq2k2tcps
	I0717 18:04:57.128121  401374 out.go:204]   - Configuring RBAC rules ...
	I0717 18:04:57.128245  401374 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:04:57.134204  401374 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:04:57.144629  401374 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:04:57.147626  401374 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:04:57.150690  401374 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:04:57.154130  401374 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:04:57.464793  401374 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:04:57.901474  401374 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:04:58.464700  401374 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:04:58.465680  401374 kubeadm.go:310] 
	I0717 18:04:58.465795  401374 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:04:58.465830  401374 kubeadm.go:310] 
	I0717 18:04:58.465939  401374 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:04:58.465951  401374 kubeadm.go:310] 
	I0717 18:04:58.466004  401374 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:04:58.466091  401374 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:04:58.466171  401374 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:04:58.466185  401374 kubeadm.go:310] 
	I0717 18:04:58.466261  401374 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:04:58.466295  401374 kubeadm.go:310] 
	I0717 18:04:58.466382  401374 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:04:58.466400  401374 kubeadm.go:310] 
	I0717 18:04:58.466483  401374 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:04:58.466577  401374 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:04:58.466673  401374 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:04:58.466682  401374 kubeadm.go:310] 
	I0717 18:04:58.466796  401374 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:04:58.466916  401374 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:04:58.466928  401374 kubeadm.go:310] 
	I0717 18:04:58.467002  401374 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token abmxn2.f1edq7xeq2k2tcps \
	I0717 18:04:58.467092  401374 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 18:04:58.467113  401374 kubeadm.go:310] 	--control-plane 
	I0717 18:04:58.467117  401374 kubeadm.go:310] 
	I0717 18:04:58.467225  401374 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:04:58.467255  401374 kubeadm.go:310] 
	I0717 18:04:58.467362  401374 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token abmxn2.f1edq7xeq2k2tcps \
	I0717 18:04:58.467494  401374 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 18:04:58.467647  401374 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
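The block above is kubeadm's own stdout mirrored line by line into minikube's log (the kubeadm.go:310 prefix marks the relay). Bootstrapping took roughly ten seconds end to end: the kubelet reported healthy after about 0.5s and the API server after about 5.5s, and the join commands are informational only on this single-node addons cluster. A rough sketch of the relay pattern, using the binary path from the command at the top of the block and a representative subset of its --ignore-preflight-errors list (illustrative only, not minikube's implementation):

```go
// Illustrative only: run `kubeadm init` and mirror each output line into our own log,
// similar to the "kubeadm.go:310]" lines above.
package main

import (
	"bufio"
	"log"
	"os/exec"
)

func main() {
	// Subset of the flags shown in the log; the full invocation also ignores the
	// pre-existing manifest and etcd directory checks.
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.2/kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem")
	out, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(out)
	for sc.Scan() {
		log.Printf("[kubeadm] %s", sc.Text()) // one log line per kubeadm output line
	}
	if err := cmd.Wait(); err != nil {
		log.Fatalf("kubeadm init failed: %v", err)
	}
}
```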
	I0717 18:04:58.467770  401374 cni.go:84] Creating CNI manager for ""
	I0717 18:04:58.467789  401374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:04:58.469449  401374 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:04:58.470561  401374 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:04:58.481718  401374 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
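With the "kvm2" driver and "crio" runtime, minikube falls back to a plain bridge CNI and copies a small conflist (496 bytes here) from memory to /etc/cni/net.d/1-k8s.conflist on the node. The sketch below writes such a bridge configuration; the JSON is a generic bridge/host-local example and is not necessarily byte-for-byte what minikube ships:

```go
// Illustrative only: write a minimal bridge CNI config like the one copied to
// /etc/cni/net.d/1-k8s.conflist above. The exact contents minikube uses may differ.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```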
	I0717 18:04:58.512316  401374 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:04:58.512409  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:04:58.512429  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-453453 minikube.k8s.io/updated_at=2024_07_17T18_04_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=addons-453453 minikube.k8s.io/primary=true
	I0717 18:04:58.537038  401374 ops.go:34] apiserver oom_adj: -16
	I0717 18:04:58.658298  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:04:59.158767  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:04:59.658591  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:00.159162  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:00.659074  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:01.158649  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:01.658349  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:02.158619  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:02.659173  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:03.159319  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:03.658321  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:04.158335  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:04.658553  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:05.159110  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:05.659114  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:06.159298  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:06.659138  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:07.159002  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:07.659099  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:08.158583  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:08.659093  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:09.159051  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:09.658358  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:10.159121  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:10.658390  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:10.749327  401374 kubeadm.go:1113] duration metric: took 12.236977163s to wait for elevateKubeSystemPrivileges
	I0717 18:05:10.749382  401374 kubeadm.go:394] duration metric: took 22.537956123s to StartCluster
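The run of identical "kubectl get sa default" lines above is a poll: after creating the minikube-rbac cluster-admin binding for kube-system:default, minikube retries roughly every 500ms until the "default" ServiceAccount can be read (a sign the service-account controller is up), which took about 12s here per the elevateKubeSystemPrivileges metric. A stdlib-only sketch of the same wait loop (the timeout and structure are illustrative):

```go
// Illustrative only: poll every 500ms until the "default" ServiceAccount exists,
// mirroring the repeated `kubectl get sa default` calls in the log above.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	const kubectl = "/var/lib/minikube/binaries/v1.30.2/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds (exit 0) once the "default" ServiceAccount can be read.
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			log.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for the default service account")
}
```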
	I0717 18:05:10.749405  401374 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:05:10.749549  401374 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:05:10.750075  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:05:10.750279  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 18:05:10.750300  401374 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:05:10.750371  401374 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0717 18:05:10.750499  401374 addons.go:69] Setting yakd=true in profile "addons-453453"
	I0717 18:05:10.750515  401374 addons.go:69] Setting helm-tiller=true in profile "addons-453453"
	I0717 18:05:10.750526  401374 addons.go:69] Setting cloud-spanner=true in profile "addons-453453"
	I0717 18:05:10.750550  401374 addons.go:69] Setting registry=true in profile "addons-453453"
	I0717 18:05:10.750545  401374 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-453453"
	I0717 18:05:10.750565  401374 addons.go:69] Setting storage-provisioner=true in profile "addons-453453"
	I0717 18:05:10.750583  401374 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-453453"
	I0717 18:05:10.750584  401374 addons.go:69] Setting volumesnapshots=true in profile "addons-453453"
	I0717 18:05:10.750524  401374 config.go:182] Loaded profile config "addons-453453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:05:10.750591  401374 addons.go:69] Setting ingress-dns=true in profile "addons-453453"
	I0717 18:05:10.750591  401374 addons.go:69] Setting ingress=true in profile "addons-453453"
	I0717 18:05:10.750601  401374 addons.go:234] Setting addon volumesnapshots=true in "addons-453453"
	I0717 18:05:10.750601  401374 addons.go:69] Setting metrics-server=true in profile "addons-453453"
	I0717 18:05:10.750608  401374 addons.go:234] Setting addon storage-provisioner=true in "addons-453453"
	I0717 18:05:10.750614  401374 addons.go:234] Setting addon ingress-dns=true in "addons-453453"
	I0717 18:05:10.750617  401374 addons.go:234] Setting addon ingress=true in "addons-453453"
	I0717 18:05:10.750631  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.750553  401374 addons.go:69] Setting gcp-auth=true in profile "addons-453453"
	I0717 18:05:10.750642  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.750540  401374 addons.go:234] Setting addon yakd=true in "addons-453453"
	I0717 18:05:10.750651  401374 mustload.go:65] Loading cluster: addons-453453
	I0717 18:05:10.750660  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.750674  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.750676  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.750538  401374 addons.go:69] Setting default-storageclass=true in profile "addons-453453"
	I0717 18:05:10.750715  401374 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-453453"
	I0717 18:05:10.750830  401374 config.go:182] Loaded profile config "addons-453453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:05:10.750986  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.751017  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.750555  401374 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-453453"
	I0717 18:05:10.751061  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.751065  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.750585  401374 addons.go:234] Setting addon cloud-spanner=true in "addons-453453"
	I0717 18:05:10.750575  401374 addons.go:69] Setting volcano=true in profile "addons-453453"
	I0717 18:05:10.751079  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.751097  401374 addons.go:234] Setting addon volcano=true in "addons-453453"
	I0717 18:05:10.751101  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751102  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.751114  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751120  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.751127  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751162  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.750524  401374 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-453453"
	I0717 18:05:10.750586  401374 addons.go:234] Setting addon helm-tiller=true in "addons-453453"
	I0717 18:05:10.751183  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.750621  401374 addons.go:234] Setting addon metrics-server=true in "addons-453453"
	I0717 18:05:10.751212  401374 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-453453"
	I0717 18:05:10.751163  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.750631  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.750594  401374 addons.go:69] Setting inspektor-gadget=true in profile "addons-453453"
	I0717 18:05:10.750567  401374 addons.go:234] Setting addon registry=true in "addons-453453"
	I0717 18:05:10.751069  401374 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-453453"
	I0717 18:05:10.751265  401374 addons.go:234] Setting addon inspektor-gadget=true in "addons-453453"
	I0717 18:05:10.751233  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751392  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.751120  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751469  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.751599  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.751623  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751762  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.751784  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751825  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.751844  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751853  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.751853  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.751887  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751903  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.751942  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.752058  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.752287  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.752311  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.752328  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.752334  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.752423  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.752465  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.752570  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.752640  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.753109  401374 out.go:177] * Verifying Kubernetes components...
	I0717 18:05:10.754811  401374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
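From here on, most of the log is libmachine fanning out to the kvm2 machine driver: each "Launching plugin server for driver kvm2" / "Plugin server listening at address 127.0.0.1:NNNN" pair starts a separate docker-machine-driver-kvm2 process, and the following "() Calling .GetVersion", ".SetConfigRaw", ".GetMachineName", and "(addons-453453) Calling .GetState" lines are individual calls against that local server. The sketch below only illustrates the local-RPC pattern; libmachine's real wire protocol, service name, and method signatures differ:

```go
// Illustrative only: the general shape of driving a driver plugin over a local RPC
// connection. Not libmachine's actual protocol or method set.
package main

import (
	"log"
	"net/rpc"
)

func main() {
	// Address corresponds to one of the "Plugin server listening at address ..." lines.
	client, err := rpc.Dial("tcp", "127.0.0.1:46241")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	var version int
	// Each "Calling .GetVersion"-style log line is one synchronous call like this.
	if err := client.Call("Driver.GetVersion", struct{}{}, &version); err != nil {
		log.Fatal(err)
	}
	log.Printf("driver API version: %d", version)
}
```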
	I0717 18:05:10.771453  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I0717 18:05:10.771473  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33559
	I0717 18:05:10.771453  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43079
	I0717 18:05:10.772114  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.772218  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.772817  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.772838  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.772989  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.772999  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.773139  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.773713  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.773735  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.773785  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.774130  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.774310  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.774352  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.774772  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.774952  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45129
	I0717 18:05:10.776855  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.780090  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0717 18:05:10.780428  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.780463  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.780610  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.780647  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.781063  401374 addons.go:234] Setting addon default-storageclass=true in "addons-453453"
	I0717 18:05:10.781111  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.781462  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.781503  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.789678  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43599
	I0717 18:05:10.789869  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45995
	I0717 18:05:10.789992  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.792500  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.792507  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.792636  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.792670  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.792695  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.794155  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.794238  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.794266  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.794312  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.794342  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.794414  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.794432  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.794773  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.794846  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.802174  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.802192  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.802174  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.802441  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.802935  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.802992  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.803104  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.803157  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.804739  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
	I0717 18:05:10.805287  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.806317  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.806336  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.807046  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.808098  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.808128  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.808811  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.809176  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.809211  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.815040  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38753
	I0717 18:05:10.815600  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.816096  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.816117  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.816197  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34723
	I0717 18:05:10.816522  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.816595  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.816722  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.817181  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.817198  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.818669  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.818741  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33629
	I0717 18:05:10.819296  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.819875  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.819899  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.820303  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.820466  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.820576  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.821520  401374 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:05:10.821958  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.822002  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.823246  401374 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:05:10.823268  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:05:10.823291  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.823388  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38403
	I0717 18:05:10.823418  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.824649  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.825488  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.825510  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.825643  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 18:05:10.826244  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.827276  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.827418  401374 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 18:05:10.827442  401374 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 18:05:10.827467  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.827473  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.827543  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.827561  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.827574  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.827584  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.827898  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.828096  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.828333  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
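The "scp memory --> ..." lines show that addon manifests are embedded in the minikube binary and streamed to the node over the SSH connection recorded in the "new ssh client" line, rather than copied from local files. A rough equivalent that shells out to the system ssh binary (minikube uses its own sshutil/ssh_runner plumbing; the key path and address are taken from the log, and the helper itself is hypothetical):

```go
// Illustrative only: stream an in-memory manifest to a path on the VM via ssh + sudo tee,
// analogous to the "scp memory --> /etc/kubernetes/addons/..." lines above.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

func copyToNode(manifest []byte, remotePath string) error {
	cmd := exec.Command("ssh",
		"-i", "/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa",
		"docker@192.168.39.136",
		fmt.Sprintf("sudo tee %s > /dev/null", remotePath))
	cmd.Stdin = bytes.NewReader(manifest)
	return cmd.Run()
}

func main() {
	manifest := []byte("# embedded storage-provisioner manifest bytes would go here\n")
	if err := copyToNode(manifest, "/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		log.Fatal(err)
	}
}
```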
	I0717 18:05:10.832614  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.832623  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45161
	I0717 18:05:10.832646  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.832664  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.832626  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.833021  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.833068  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.833464  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.833538  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.833568  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.833685  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.833981  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.834556  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.834598  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.838781  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36431
	I0717 18:05:10.838811  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
	I0717 18:05:10.838785  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36683
	I0717 18:05:10.839292  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.839362  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.839390  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.839866  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.839867  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.839888  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.839903  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.840022  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.840045  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.840259  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.840265  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.840881  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.840937  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.841383  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.841430  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.842728  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38875
	I0717 18:05:10.842875  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45449
	I0717 18:05:10.843047  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.843227  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.843258  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.843746  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.843765  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.844137  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.844588  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.844693  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.844713  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.845664  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.846066  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.848471  401374 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-453453"
	I0717 18:05:10.848538  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.848907  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.848938  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.849515  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.849791  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.849835  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.851270  401374 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0717 18:05:10.851995  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0717 18:05:10.852436  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.852464  401374 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0717 18:05:10.852480  401374 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0717 18:05:10.852542  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.852920  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39879
	I0717 18:05:10.853180  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.853195  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.853630  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.853743  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.854235  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.854253  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.854538  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.854703  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.855479  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.855517  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.857017  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.857514  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.857538  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.857866  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.858094  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.858553  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40967
	I0717 18:05:10.858735  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.859015  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.859310  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.859505  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.860176  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.860195  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.860932  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.861010  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 18:05:10.861571  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.861611  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.863827  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 18:05:10.864992  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 18:05:10.865310  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34951
	I0717 18:05:10.865814  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.866418  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.866435  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.866752  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40453
	I0717 18:05:10.867173  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.867409  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 18:05:10.867679  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.867702  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.868102  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.868289  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.868403  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.868652  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.869693  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 18:05:10.870167  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.871519  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 18:05:10.871547  401374 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0717 18:05:10.872763  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 18:05:10.873054  401374 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:05:10.873069  401374 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:05:10.873090  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.874454  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39919
	I0717 18:05:10.874596  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41505
	I0717 18:05:10.874787  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
	I0717 18:05:10.874975  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.875197  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 18:05:10.875416  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.875434  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.875499  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.875628  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.876118  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.876135  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.876442  401374 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 18:05:10.876461  401374 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 18:05:10.876508  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.876549  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.876626  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0717 18:05:10.876681  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.876703  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.876771  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.877090  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.877217  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.877276  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.877338  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.877358  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.877433  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.877551  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.877776  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.877974  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.878121  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.878134  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.878196  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.878418  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.878477  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.879339  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.879615  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.880022  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.880820  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.880868  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.881320  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.881345  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.882204  401374 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0717 18:05:10.883610  401374 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0717 18:05:10.883626  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0717 18:05:10.883645  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.883772  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.883843  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0717 18:05:10.883871  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.883942  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.884627  401374 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0717 18:05:10.884861  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.885172  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.885226  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.885507  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.886329  401374 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 18:05:10.886526  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.886543  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.886615  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36111
	I0717 18:05:10.886898  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.886956  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.887232  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.887251  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.887314  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.887331  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.887888  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.887959  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.887976  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.888227  401374 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0717 18:05:10.888353  401374 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 18:05:10.888649  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 18:05:10.888460  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.888672  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.888467  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46297
	I0717 18:05:10.888500  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.888850  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.888902  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.889373  401374 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 18:05:10.889473  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.889873  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:10.889914  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:10.889640  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.890132  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.890343  401374 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0717 18:05:10.890355  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0717 18:05:10.890371  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.890434  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:10.890464  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.890489  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:10.890558  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:10.890574  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:10.890581  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:10.890798  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:10.890816  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	W0717 18:05:10.890896  401374 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0717 18:05:10.890800  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:10.892958  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.892975  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.893390  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.893514  401374 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0717 18:05:10.893562  401374 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0717 18:05:10.893646  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.893775  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.894080  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.894099  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.894335  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.894393  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.894551  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.894722  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.894749  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.894773  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.894948  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.895012  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.895039  401374 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 18:05:10.895054  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0717 18:05:10.895072  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.895140  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.895268  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.895424  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.895762  401374 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 18:05:10.895779  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0717 18:05:10.895795  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.896717  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.897136  401374 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:05:10.897151  401374 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:05:10.897168  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.898679  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.898938  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I0717 18:05:10.899305  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.899329  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.899335  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.899532  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.899737  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.899856  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.899917  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.899954  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.899978  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.900124  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.900392  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.900457  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.900472  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.900671  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46391
	I0717 18:05:10.900678  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.900735  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.900949  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.901096  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.901160  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.901217  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.901854  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.901872  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.902367  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.902823  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.902837  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.903025  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.903220  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.903247  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.903500  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.903722  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.903864  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.903998  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.904763  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.904815  401374 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0717 18:05:10.906284  401374 out.go:177]   - Using image docker.io/registry:2.8.3
	I0717 18:05:10.906342  401374 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 18:05:10.906354  401374 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 18:05:10.906373  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.909275  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.909697  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.909724  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.909780  401374 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0717 18:05:10.909881  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32897
	I0717 18:05:10.909982  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.910163  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.910233  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.910336  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.910460  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.910699  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.910712  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.911035  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.911210  401374 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 18:05:10.911221  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0717 18:05:10.911233  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.911650  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.911687  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.913784  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.914133  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.914164  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.914285  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.914457  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.914611  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.914787  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.956750  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43207
	I0717 18:05:10.957156  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.957681  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.957710  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.958046  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.958284  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.959951  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.962156  401374 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0717 18:05:10.963754  401374 out.go:177]   - Using image docker.io/busybox:stable
	I0717 18:05:10.965313  401374 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 18:05:10.965331  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0717 18:05:10.965350  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.968796  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.969272  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.969298  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.969505  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.969738  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.969919  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.970089  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	W0717 18:05:10.972731  401374 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:58326->192.168.39.136:22: read: connection reset by peer
	I0717 18:05:10.972764  401374 retry.go:31] will retry after 168.473413ms: ssh: handshake failed: read tcp 192.168.39.1:58326->192.168.39.136:22: read: connection reset by peer
	I0717 18:05:11.159669  401374 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 18:05:11.159698  401374 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 18:05:11.267381  401374 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 18:05:11.267414  401374 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 18:05:11.269269  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:05:11.306620  401374 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0717 18:05:11.306653  401374 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0717 18:05:11.319172  401374 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:05:11.319201  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 18:05:11.335081  401374 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 18:05:11.335105  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 18:05:11.339854  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 18:05:11.350335  401374 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0717 18:05:11.350359  401374 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0717 18:05:11.366658  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 18:05:11.416690  401374 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 18:05:11.416739  401374 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 18:05:11.418645  401374 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 18:05:11.418670  401374 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 18:05:11.425034  401374 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 18:05:11.425068  401374 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 18:05:11.429887  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:05:11.536284  401374 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 18:05:11.536326  401374 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0717 18:05:11.541347  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 18:05:11.597602  401374 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0717 18:05:11.597632  401374 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0717 18:05:11.611298  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 18:05:11.612131  401374 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:05:11.612161  401374 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:05:11.626949  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 18:05:11.651615  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 18:05:11.652836  401374 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 18:05:11.652858  401374 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 18:05:11.683500  401374 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 18:05:11.683529  401374 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 18:05:11.697999  401374 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0717 18:05:11.698025  401374 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0717 18:05:11.712898  401374 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 18:05:11.712933  401374 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 18:05:11.741586  401374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:05:11.741594  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 18:05:11.794714  401374 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:05:11.794747  401374 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:05:11.825304  401374 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 18:05:11.825335  401374 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 18:05:11.874955  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 18:05:11.986198  401374 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0717 18:05:11.986223  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0717 18:05:12.018851  401374 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 18:05:12.018886  401374 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 18:05:12.096189  401374 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 18:05:12.096222  401374 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 18:05:12.125803  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:05:12.183177  401374 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 18:05:12.183221  401374 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 18:05:12.265334  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0717 18:05:12.322835  401374 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 18:05:12.322865  401374 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 18:05:12.332517  401374 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 18:05:12.332568  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 18:05:12.483477  401374 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 18:05:12.483510  401374 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 18:05:12.532675  401374 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 18:05:12.532706  401374 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 18:05:12.674145  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 18:05:12.825764  401374 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 18:05:12.825799  401374 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 18:05:12.856164  401374 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 18:05:12.856192  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 18:05:13.115555  401374 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 18:05:13.115583  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0717 18:05:13.173886  401374 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 18:05:13.173917  401374 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 18:05:13.308329  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 18:05:13.570969  401374 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 18:05:13.571000  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 18:05:14.348971  401374 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 18:05:14.348997  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 18:05:14.710476  401374 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 18:05:14.710504  401374 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 18:05:15.037771  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 18:05:15.541773  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.201877979s)
	I0717 18:05:15.541842  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:15.541856  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:15.541772  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.272460789s)
	I0717 18:05:15.541926  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:15.541942  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:15.542320  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:15.542358  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:15.542372  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:15.542379  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:15.542384  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:15.542391  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:15.542393  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:15.542399  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:15.542364  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:15.542674  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:15.542692  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:15.543978  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:15.543993  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:15.544010  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:17.974281  401374 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 18:05:17.974336  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:17.977506  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:17.977869  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:17.977909  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:17.978094  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:17.978296  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:17.978452  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:17.978587  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:18.415897  401374 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 18:05:18.458521  401374 addons.go:234] Setting addon gcp-auth=true in "addons-453453"
	I0717 18:05:18.458594  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:18.458939  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:18.458978  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:18.476260  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37495
	I0717 18:05:18.476869  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:18.477447  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:18.477471  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:18.477881  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:18.478640  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:18.478676  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:18.494388  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44457
	I0717 18:05:18.494873  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:18.495415  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:18.495447  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:18.495805  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:18.496028  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:18.497650  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:18.497930  401374 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 18:05:18.497965  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:18.500887  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:18.501284  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:18.501316  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:18.501424  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:18.501613  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:18.501783  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:18.501954  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:19.410063  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.043358433s)
	I0717 18:05:19.410138  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410161  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410157  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.98023887s)
	I0717 18:05:19.410242  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.868863165s)
	I0717 18:05:19.410285  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.79895578s)
	I0717 18:05:19.410286  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410353  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410372  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.78339412s)
	I0717 18:05:19.410403  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410414  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410292  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410430  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410322  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410475  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410476  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.410485  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.410485  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.758839302s)
	I0717 18:05:19.410517  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410521  401374 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.668826749s)
	I0717 18:05:19.410495  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.410543  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410543  401374 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 18:05:19.410552  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410582  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.535587402s)
	I0717 18:05:19.410604  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410614  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410719  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.284885594s)
	I0717 18:05:19.410736  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410744  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410818  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.145454016s)
	I0717 18:05:19.410833  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410846  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410980  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.736797412s)
	I0717 18:05:19.410528  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	W0717 18:05:19.411004  401374 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 18:05:19.411047  401374 retry.go:31] will retry after 168.566382ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 18:05:19.411130  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.102770453s)
	I0717 18:05:19.411149  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.411157  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.411244  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.411264  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.411271  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.411279  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.411286  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.411325  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.411327  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.411342  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.411353  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.411355  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.411362  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.411370  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.411388  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.411396  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.411403  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.411409  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.411447  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.411454  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.411461  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.411467  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410523  401374 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.668904904s)
	I0717 18:05:19.412608  401374 node_ready.go:35] waiting up to 6m0s for node "addons-453453" to be "Ready" ...
	I0717 18:05:19.412769  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.412803  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.412813  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.412824  401374 addons.go:475] Verifying addon ingress=true in "addons-453453"
	I0717 18:05:19.413092  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.413131  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.413140  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.413168  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.413189  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.413201  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.413216  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.413289  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.413300  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.413306  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.413309  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.413328  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.413336  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.415035  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.415086  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.415099  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.415323  401374 out.go:177] * Verifying ingress addon...
	I0717 18:05:19.416180  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.416234  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.416241  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.416249  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.416259  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.416310  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.411342  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.416349  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.416358  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.416364  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.416458  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.416466  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.416469  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.416550  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.416561  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.416570  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.416578  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.416584  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.416586  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.416593  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.416789  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.416803  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.416834  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.416842  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.417281  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.417312  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.417326  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.417335  401374 addons.go:475] Verifying addon registry=true in "addons-453453"
	I0717 18:05:19.417627  401374 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 18:05:19.418377  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.418389  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.418398  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.418406  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.418469  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.418572  401374 out.go:177] * Verifying registry addon...
	I0717 18:05:19.419932  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.419959  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.419968  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.419977  401374 addons.go:475] Verifying addon metrics-server=true in "addons-453453"
	I0717 18:05:19.420657  401374 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-453453 service yakd-dashboard -n yakd-dashboard
	
	I0717 18:05:19.421001  401374 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 18:05:19.438924  401374 node_ready.go:49] node "addons-453453" has status "Ready":"True"
	I0717 18:05:19.438949  401374 node_ready.go:38] duration metric: took 26.318434ms for node "addons-453453" to be "Ready" ...
	I0717 18:05:19.438959  401374 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:05:19.448510  401374 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 18:05:19.448542  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:19.465568  401374 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 18:05:19.465588  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:19.503823  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.503860  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.504263  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.504288  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.504292  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	W0717 18:05:19.504406  401374 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0717 18:05:19.509572  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.509596  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.509948  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.509972  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.526402  401374 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4htvx" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:19.580155  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 18:05:19.919639  401374 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-453453" context rescaled to 1 replicas
	I0717 18:05:19.922309  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:19.926451  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:20.422789  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:20.425682  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:20.929933  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:20.935331  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:21.348119  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.310282234s)
	I0717 18:05:21.348198  401374 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.850237519s)
	I0717 18:05:21.348201  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:21.348349  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:21.348687  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:21.348750  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:21.348765  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:21.348763  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:21.348773  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:21.349063  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:21.349140  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:21.349157  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:21.349174  401374 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-453453"
	I0717 18:05:21.349549  401374 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 18:05:21.350536  401374 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 18:05:21.351678  401374 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0717 18:05:21.352769  401374 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 18:05:21.352872  401374 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 18:05:21.352889  401374 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 18:05:21.382926  401374 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 18:05:21.382950  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:21.422004  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:21.426579  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:21.488468  401374 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 18:05:21.488508  401374 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 18:05:21.531805  401374 pod_ready.go:102] pod "coredns-7db6d8ff4d-4htvx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:21.574650  401374 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 18:05:21.574679  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0717 18:05:21.621027  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.040817998s)
	I0717 18:05:21.621094  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:21.621113  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:21.621455  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:21.621524  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:21.621547  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:21.621569  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:21.621584  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:21.621864  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:21.621903  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:21.621919  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:21.633791  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 18:05:21.858969  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:21.922683  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:21.925171  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:22.372735  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:22.434345  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:22.465238  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:22.642373  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.00854154s)
	I0717 18:05:22.642433  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:22.642451  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:22.642820  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:22.642873  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:22.642888  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:22.642913  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:22.643175  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:22.643199  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:22.643216  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:22.645232  401374 addons.go:475] Verifying addon gcp-auth=true in "addons-453453"
	I0717 18:05:22.647743  401374 out.go:177] * Verifying gcp-auth addon...
	I0717 18:05:22.649939  401374 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 18:05:22.663063  401374 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 18:05:22.663082  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:22.858828  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:22.922078  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:22.928259  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:23.154174  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:23.442686  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:23.444436  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:23.445520  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:23.537801  401374 pod_ready.go:102] pod "coredns-7db6d8ff4d-4htvx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:23.664413  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:23.860751  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:23.923064  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:23.926006  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:24.153254  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:24.358577  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:24.422258  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:24.425438  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:24.654527  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:24.858873  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:24.922474  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:24.925158  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:25.154439  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:25.358637  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:25.426154  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:25.428701  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:25.653842  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:25.859781  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:25.922525  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:25.925804  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:26.035619  401374 pod_ready.go:102] pod "coredns-7db6d8ff4d-4htvx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:26.154277  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:26.360922  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:26.423295  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:26.425835  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:26.653696  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:26.859650  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:26.923060  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:26.926556  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:27.155059  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:27.358355  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:27.422714  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:27.425920  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:27.654234  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:27.859310  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:27.922935  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:27.926079  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:28.035834  401374 pod_ready.go:102] pod "coredns-7db6d8ff4d-4htvx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:28.154292  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:28.358775  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:28.433538  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:28.438546  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:28.533095  401374 pod_ready.go:97] pod "coredns-7db6d8ff4d-4htvx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:28 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:11 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:11 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:11 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:11 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.136 HostIPs:[{IP:192.168.39.136}] PodIP: PodIPs:[] StartTime:2024-07-17 18:05:11 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-17 18:05:15 +0000 UTC,FinishedAt:2024-07-17 18:05:25 +0000 UTC,ContainerID:cri-o://e104951ed2a196aba5a0c41640cb6a90124bc7c26f66058d177ddf3c2b39a1bf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://e104951ed2a196aba5a0c41640cb6a90124bc7c26f66058d177ddf3c2b39a1bf Started:0xc001fa93d0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0717 18:05:28.533128  401374 pod_ready.go:81] duration metric: took 9.006702628s for pod "coredns-7db6d8ff4d-4htvx" in "kube-system" namespace to be "Ready" ...
	E0717 18:05:28.533141  401374 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-4htvx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:28 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:11 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:11 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:11 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:11 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.136 HostIPs:[{IP:192.168.39.136}] PodIP: PodIPs:[] StartTime:2024-07-17 18:05:11 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-17 18:05:15 +0000 UTC,FinishedAt:2024-07-17 18:05:25 +0000 UTC,ContainerID:cri-o://e104951ed2a196aba5a0c41640cb6a90124bc7c26f66058d177ddf3c2b39a1bf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://e104951ed2a196aba5a0c41640cb6a90124bc7c26f66058d177ddf3c2b39a1bf Started:0xc001fa93d0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0717 18:05:28.533148  401374 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wpzc7" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.541737  401374 pod_ready.go:92] pod "coredns-7db6d8ff4d-wpzc7" in "kube-system" namespace has status "Ready":"True"
	I0717 18:05:28.541757  401374 pod_ready.go:81] duration metric: took 8.601754ms for pod "coredns-7db6d8ff4d-wpzc7" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.541767  401374 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-453453" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.557006  401374 pod_ready.go:92] pod "etcd-addons-453453" in "kube-system" namespace has status "Ready":"True"
	I0717 18:05:28.557029  401374 pod_ready.go:81] duration metric: took 15.255712ms for pod "etcd-addons-453453" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.557038  401374 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-453453" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.561798  401374 pod_ready.go:92] pod "kube-apiserver-addons-453453" in "kube-system" namespace has status "Ready":"True"
	I0717 18:05:28.561816  401374 pod_ready.go:81] duration metric: took 4.772194ms for pod "kube-apiserver-addons-453453" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.561825  401374 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-453453" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.569525  401374 pod_ready.go:92] pod "kube-controller-manager-addons-453453" in "kube-system" namespace has status "Ready":"True"
	I0717 18:05:28.569545  401374 pod_ready.go:81] duration metric: took 7.713728ms for pod "kube-controller-manager-addons-453453" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.569558  401374 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-45g92" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.653707  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:28.858941  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:28.922146  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:28.925527  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:28.931064  401374 pod_ready.go:92] pod "kube-proxy-45g92" in "kube-system" namespace has status "Ready":"True"
	I0717 18:05:28.931093  401374 pod_ready.go:81] duration metric: took 361.527965ms for pod "kube-proxy-45g92" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.931106  401374 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-453453" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:29.154757  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:29.330572  401374 pod_ready.go:92] pod "kube-scheduler-addons-453453" in "kube-system" namespace has status "Ready":"True"
	I0717 18:05:29.330601  401374 pod_ready.go:81] duration metric: took 399.485702ms for pod "kube-scheduler-addons-453453" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:29.330615  401374 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:29.366414  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:29.426503  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:29.428580  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:29.654435  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:29.859182  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:29.922684  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:29.925353  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:30.156271  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:30.357484  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:30.421984  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:30.424441  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:30.653995  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:30.859102  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:30.922686  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:30.926128  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:31.156241  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:31.337259  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:31.358012  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:31.427365  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:31.427783  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:31.653616  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:31.860922  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:31.922167  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:31.925222  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:32.153054  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:32.358406  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:32.422751  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:32.425936  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:32.654938  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:32.935065  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:32.935622  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:32.936920  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:33.155024  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:33.359113  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:33.422388  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:33.425475  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:33.653656  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:33.839508  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:33.858375  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:33.923056  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:33.926454  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:34.153606  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:34.358918  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:34.422205  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:34.425494  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:34.653941  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:34.858643  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:34.922990  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:34.925191  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:35.153317  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:35.359936  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:35.423119  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:35.425944  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:35.654971  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:35.865131  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:35.923327  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:35.926270  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:36.154203  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:36.337780  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:36.358954  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:36.422614  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:36.425569  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:36.653900  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:36.861597  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:36.922654  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:36.925154  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:37.154501  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:37.358148  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:37.423273  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:37.426059  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:37.656181  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:37.859214  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:37.922982  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:37.933772  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:38.154223  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:38.357850  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:38.422211  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:38.425247  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:38.653446  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:38.836798  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:38.858831  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:38.921951  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:38.925261  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:39.153197  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:39.358702  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:39.424347  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:39.426412  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:39.654063  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:39.858218  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:39.922154  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:39.924684  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:40.157786  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:40.359675  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:40.422008  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:40.426294  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:40.653978  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:40.858272  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:40.921926  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:40.924591  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:41.153691  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:41.338611  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:41.359422  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:41.421421  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:41.424793  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:41.653687  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:41.858993  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:41.922314  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:41.924631  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:42.156019  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:42.359014  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:42.421676  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:42.425631  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:42.653439  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:42.858328  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:42.922566  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:42.925285  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:43.616563  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:43.616655  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:43.617008  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:43.618089  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:43.619509  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:43.653753  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:43.858635  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:43.921919  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:43.924867  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:44.154715  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:44.360708  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:44.422017  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:44.424790  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:44.653860  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:44.858478  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:44.923159  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:44.926129  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:45.153830  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:45.358634  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:45.422405  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:45.426278  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:45.654803  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:45.837439  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:45.857646  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:45.924288  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:45.927627  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:46.154440  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:46.358742  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:46.422927  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:46.426622  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:46.654192  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:46.858451  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:46.921628  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:46.926233  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:47.154716  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:47.359244  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:47.422266  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:47.425098  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:47.654146  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:47.838711  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:47.858622  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:47.923152  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:47.926888  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:48.154095  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:48.358095  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:48.422717  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:48.428772  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:48.653880  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:48.859032  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:48.921505  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:48.925164  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:49.154289  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:49.359374  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:49.422558  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:49.425355  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:49.653824  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:49.839244  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:49.861373  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:49.929626  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:49.930376  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:50.153619  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:50.358499  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:50.422399  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:50.425174  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:50.654315  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:50.859283  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:50.921618  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:50.925586  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:51.157434  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:51.358550  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:51.422449  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:51.425305  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:51.659122  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:51.863027  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:51.922486  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:51.926207  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:52.154233  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:52.337089  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:52.358653  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:52.422178  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:52.425431  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:52.653534  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:52.858403  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:52.923532  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:52.926270  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:53.153512  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:53.359024  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:53.422613  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:53.425240  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:53.653482  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:53.858470  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:53.921294  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:53.924586  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:54.153679  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:54.337771  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:54.358525  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:54.427244  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:54.434561  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:54.653481  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:54.862390  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:54.923150  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:54.927800  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:55.154453  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:55.358857  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:55.421519  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:55.424743  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:55.653638  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:55.971461  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:55.971652  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:55.973673  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:56.155394  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:56.359441  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:56.421235  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:56.425187  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:56.654466  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:56.838382  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:56.860921  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:56.921671  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:56.925359  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:57.153106  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:57.358541  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:57.421696  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:57.425836  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:57.653562  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:57.858291  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:57.922402  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:57.925140  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:58.333161  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:58.358297  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:58.423015  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:58.425665  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:58.654078  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:58.858970  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:58.922436  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:58.925053  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:59.154034  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:59.336368  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:59.358351  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:59.422362  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:59.434020  401374 kapi.go:107] duration metric: took 40.013013684s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 18:05:59.654810  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:59.858732  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:59.922486  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:00.153542  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:00.370121  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:00.421860  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:00.653872  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:00.858338  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:00.922660  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:01.154341  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:01.337063  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:01.363806  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:01.421963  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:01.654788  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:01.858362  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:01.925345  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:02.154279  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:02.358154  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:02.423012  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:02.654218  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:02.858521  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:02.923180  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:03.153997  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:03.358445  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:03.422270  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:03.654399  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:03.837370  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:03.860612  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:03.922428  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:04.158667  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:04.858163  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:04.858508  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:04.862503  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:04.867765  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:04.921654  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:05.153890  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:05.363473  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:05.425187  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:05.653905  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:05.837435  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:05.858451  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:05.922094  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:06.154165  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:06.360750  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:06.426124  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:06.653752  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:06.859838  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:06.922080  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:07.154870  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:07.366847  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:07.422034  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:07.654393  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:07.857630  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:07.921789  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:08.153841  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:08.337432  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:08.360166  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:08.422960  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:08.653815  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:08.858972  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:08.921754  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:09.153426  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:09.363444  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:09.422525  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:09.654275  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:09.863909  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:09.922056  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:10.154832  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:10.358056  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:10.422491  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:10.653339  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:10.836657  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:10.858639  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:10.922435  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:11.155166  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:11.357487  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:11.422076  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:11.653782  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:11.859723  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:11.923244  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:12.154667  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:12.361256  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:12.422665  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:12.653176  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:12.848194  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:12.883554  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:13.389224  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:13.389302  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:13.393590  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:13.422089  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:13.654159  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:13.858594  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:13.922236  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:14.154257  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:14.357899  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:14.422932  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:14.654235  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:14.857482  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:14.921818  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:15.153599  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:15.336356  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:15.357673  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:15.421461  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:15.654328  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:15.857881  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:15.923255  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:16.154562  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:16.363483  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:16.424898  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:16.654528  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:16.859263  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:16.921875  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:17.154474  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:17.336462  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:17.357719  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:17.421484  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:17.653298  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:17.859168  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:17.922781  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:18.153632  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:18.358581  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:18.421949  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:18.655700  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:18.858122  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:18.921783  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:19.153360  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:19.338471  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:19.357878  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:19.421541  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:19.653396  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:19.862335  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:19.922920  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:20.153714  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:20.361870  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:20.425166  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:20.654761  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:20.857816  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:20.922063  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:21.153635  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:21.357339  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:21.425356  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:21.653540  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:21.955279  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:21.957006  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:21.961758  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:22.154769  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:22.357988  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:22.421935  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:22.653734  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:22.857372  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:22.923175  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:23.154321  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:23.357375  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:23.422581  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:23.653600  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:23.868454  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:23.922877  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:24.153915  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:24.340419  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:24.357828  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:24.422360  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:24.654531  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:24.868726  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:24.932146  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:25.153918  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:25.358317  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:25.422479  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:25.656301  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:25.858358  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:25.926497  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:26.154805  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:26.357968  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:26.421939  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:26.654531  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:27.134724  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:27.135042  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:27.138907  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:27.164035  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:27.358481  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:27.421486  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:27.654011  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:27.858712  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:27.922531  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:28.155575  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:28.360205  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:28.437976  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:28.654330  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:28.858382  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:28.922619  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:29.153407  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:29.396369  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:29.397864  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:29.421777  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:29.665000  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:29.857317  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:29.922257  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:30.154333  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:30.359417  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:30.422069  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:30.653823  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:30.858127  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:30.921385  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:31.154170  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:31.357527  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:31.421283  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:31.654181  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:31.848523  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:31.861173  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:31.922325  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:32.154484  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:32.358303  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:32.422566  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:32.653302  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:32.857723  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:32.922404  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:33.154335  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:33.358290  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:33.422245  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:33.653686  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:33.857951  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:33.921882  401374 kapi.go:107] duration metric: took 1m14.504254462s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 18:06:34.153659  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:34.337858  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:34.358464  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:34.653373  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:34.859889  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:35.153978  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:35.358827  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:35.653560  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:35.858083  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:36.154525  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:36.358514  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:36.654153  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:36.836208  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:36.857358  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:37.154449  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:37.338492  401374 pod_ready.go:92] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"True"
	I0717 18:06:37.338526  401374 pod_ready.go:81] duration metric: took 1m8.007903343s for pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace to be "Ready" ...
	I0717 18:06:37.338541  401374 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-h5kz7" in "kube-system" namespace to be "Ready" ...
	I0717 18:06:37.345777  401374 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-h5kz7" in "kube-system" namespace has status "Ready":"True"
	I0717 18:06:37.345801  401374 pod_ready.go:81] duration metric: took 7.25164ms for pod "nvidia-device-plugin-daemonset-h5kz7" in "kube-system" namespace to be "Ready" ...
	I0717 18:06:37.345826  401374 pod_ready.go:38] duration metric: took 1m17.906855494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:06:37.345860  401374 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:06:37.345895  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:06:37.345958  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:06:37.359663  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:37.467268  401374 cri.go:89] found id: "b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68"
	I0717 18:06:37.467293  401374 cri.go:89] found id: ""
	I0717 18:06:37.467302  401374 logs.go:276] 1 containers: [b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68]
	I0717 18:06:37.467361  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:37.474729  401374 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:06:37.474803  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:06:37.545523  401374 cri.go:89] found id: "b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92"
	I0717 18:06:37.545648  401374 cri.go:89] found id: ""
	I0717 18:06:37.545707  401374 logs.go:276] 1 containers: [b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92]
	I0717 18:06:37.545774  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:37.553547  401374 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:06:37.553631  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:06:37.609478  401374 cri.go:89] found id: "d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26"
	I0717 18:06:37.609505  401374 cri.go:89] found id: ""
	I0717 18:06:37.609515  401374 logs.go:276] 1 containers: [d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26]
	I0717 18:06:37.609576  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:37.614797  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:06:37.614874  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:06:37.653467  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:37.684402  401374 cri.go:89] found id: "259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5"
	I0717 18:06:37.684430  401374 cri.go:89] found id: ""
	I0717 18:06:37.684439  401374 logs.go:276] 1 containers: [259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5]
	I0717 18:06:37.684511  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:37.695308  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:06:37.695397  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:06:37.741251  401374 cri.go:89] found id: "2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3"
	I0717 18:06:37.741285  401374 cri.go:89] found id: ""
	I0717 18:06:37.741295  401374 logs.go:276] 1 containers: [2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3]
	I0717 18:06:37.741351  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:37.748077  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:06:37.748151  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:06:37.842345  401374 cri.go:89] found id: "35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1"
	I0717 18:06:37.842369  401374 cri.go:89] found id: ""
	I0717 18:06:37.842378  401374 logs.go:276] 1 containers: [35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1]
	I0717 18:06:37.842445  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:37.851317  401374 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:06:37.851397  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:06:37.864948  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:37.939382  401374 cri.go:89] found id: ""
	I0717 18:06:37.939408  401374 logs.go:276] 0 containers: []
	W0717 18:06:37.939418  401374 logs.go:278] No container was found matching "kindnet"
	I0717 18:06:37.939429  401374 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:06:37.939449  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 18:06:38.102474  401374 logs.go:123] Gathering logs for kube-apiserver [b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68] ...
	I0717 18:06:38.102503  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68"
	I0717 18:06:38.153765  401374 kapi.go:107] duration metric: took 1m15.50381932s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 18:06:38.155699  401374 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-453453 cluster.
	I0717 18:06:38.157029  401374 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 18:06:38.158346  401374 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
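	Note on the three gcp-auth messages above: the addon mounts GCP credentials into every newly created pod unless the pod carries the gcp-auth-skip-secret label. Below is a minimal sketch of such a pod spec using the standard k8s.io/api Go types; the label key is taken from the output above, while the label value "true", the pod name, and the image are illustrative assumptions, not something the log confirms.

	package example

	import (
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// PodWithoutGCPCreds builds a pod that the gcp-auth admission webhook is
	// expected to leave alone, because it carries the gcp-auth-skip-secret
	// label mentioned in the minikube output above. The label value "true"
	// and the container details are assumptions for illustration only.
	func PodWithoutGCPCreds() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-creds",
				Namespace: "default",
				Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "nginx"},
				},
			},
		}
	}

	Pods created without that label would get the credentials mounted automatically; per the message above, pods that already existed when the addon was enabled need to be recreated or the addon re-enabled with --refresh.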
	I0717 18:06:38.203281  401374 logs.go:123] Gathering logs for etcd [b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92] ...
	I0717 18:06:38.203313  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92"
	I0717 18:06:38.306026  401374 logs.go:123] Gathering logs for coredns [d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26] ...
	I0717 18:06:38.306065  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26"
	I0717 18:06:38.359053  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:38.406973  401374 logs.go:123] Gathering logs for kube-scheduler [259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5] ...
	I0717 18:06:38.407015  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5"
	I0717 18:06:38.510738  401374 logs.go:123] Gathering logs for kube-proxy [2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3] ...
	I0717 18:06:38.510773  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3"
	I0717 18:06:38.567448  401374 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:06:38.567488  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:06:38.859403  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:38.919365  401374 logs.go:123] Gathering logs for kubelet ...
	I0717 18:06:38.919402  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 18:06:38.998768  401374 logs.go:138] Found kubelet problem: Jul 17 18:05:17 addons-453453 kubelet[1277]: W0717 18:05:17.370589    1277 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	W0717 18:06:38.999024  401374 logs.go:138] Found kubelet problem: Jul 17 18:05:17 addons-453453 kubelet[1277]: E0717 18:05:17.370689    1277 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	I0717 18:06:39.029480  401374 logs.go:123] Gathering logs for kube-controller-manager [35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1] ...
	I0717 18:06:39.029527  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1"
	I0717 18:06:39.126133  401374 logs.go:123] Gathering logs for container status ...
	I0717 18:06:39.126183  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:06:39.249805  401374 logs.go:123] Gathering logs for dmesg ...
	I0717 18:06:39.249853  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:06:39.279848  401374 out.go:304] Setting ErrFile to fd 2...
	I0717 18:06:39.279880  401374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 18:06:39.279948  401374 out.go:239] X Problems detected in kubelet:
	W0717 18:06:39.279966  401374 out.go:239]   Jul 17 18:05:17 addons-453453 kubelet[1277]: W0717 18:05:17.370589    1277 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	W0717 18:06:39.279986  401374 out.go:239]   Jul 17 18:05:17 addons-453453 kubelet[1277]: E0717 18:05:17.370689    1277 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	I0717 18:06:39.279999  401374 out.go:304] Setting ErrFile to fd 2...
	I0717 18:06:39.280008  401374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:06:39.359511  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:39.861150  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:40.358492  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:40.861662  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:41.358067  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:41.986710  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:42.361146  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:42.858215  401374 kapi.go:107] duration metric: took 1m21.505444005s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 18:06:42.859950  401374 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, nvidia-device-plugin, ingress-dns, helm-tiller, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0717 18:06:42.861320  401374 addons.go:510] duration metric: took 1m32.110951894s for enable addons: enabled=[storage-provisioner cloud-spanner nvidia-device-plugin ingress-dns helm-tiller inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0717 18:06:49.280741  401374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:06:49.300712  401374 api_server.go:72] duration metric: took 1m38.550379192s to wait for apiserver process to appear ...
	I0717 18:06:49.300753  401374 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:06:49.300802  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:06:49.300871  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:06:49.342717  401374 cri.go:89] found id: "b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68"
	I0717 18:06:49.342739  401374 cri.go:89] found id: ""
	I0717 18:06:49.342748  401374 logs.go:276] 1 containers: [b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68]
	I0717 18:06:49.342815  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:49.346989  401374 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:06:49.347046  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:06:49.389991  401374 cri.go:89] found id: "b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92"
	I0717 18:06:49.390018  401374 cri.go:89] found id: ""
	I0717 18:06:49.390026  401374 logs.go:276] 1 containers: [b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92]
	I0717 18:06:49.390079  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:49.394539  401374 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:06:49.394611  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:06:49.431648  401374 cri.go:89] found id: "d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26"
	I0717 18:06:49.431679  401374 cri.go:89] found id: ""
	I0717 18:06:49.431691  401374 logs.go:276] 1 containers: [d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26]
	I0717 18:06:49.431754  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:49.436288  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:06:49.436358  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:06:49.482361  401374 cri.go:89] found id: "259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5"
	I0717 18:06:49.482392  401374 cri.go:89] found id: ""
	I0717 18:06:49.482403  401374 logs.go:276] 1 containers: [259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5]
	I0717 18:06:49.482469  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:49.491021  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:06:49.491105  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:06:49.540530  401374 cri.go:89] found id: "2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3"
	I0717 18:06:49.540564  401374 cri.go:89] found id: ""
	I0717 18:06:49.540576  401374 logs.go:276] 1 containers: [2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3]
	I0717 18:06:49.540638  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:49.545021  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:06:49.545083  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:06:49.587754  401374 cri.go:89] found id: "35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1"
	I0717 18:06:49.587777  401374 cri.go:89] found id: ""
	I0717 18:06:49.587788  401374 logs.go:276] 1 containers: [35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1]
	I0717 18:06:49.587839  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:49.592068  401374 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:06:49.592133  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:06:49.630719  401374 cri.go:89] found id: ""
	I0717 18:06:49.630751  401374 logs.go:276] 0 containers: []
	W0717 18:06:49.630763  401374 logs.go:278] No container was found matching "kindnet"
	I0717 18:06:49.630775  401374 logs.go:123] Gathering logs for kubelet ...
	I0717 18:06:49.630793  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 18:06:49.686998  401374 logs.go:138] Found kubelet problem: Jul 17 18:05:17 addons-453453 kubelet[1277]: W0717 18:05:17.370589    1277 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	W0717 18:06:49.687177  401374 logs.go:138] Found kubelet problem: Jul 17 18:05:17 addons-453453 kubelet[1277]: E0717 18:05:17.370689    1277 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	I0717 18:06:49.712837  401374 logs.go:123] Gathering logs for dmesg ...
	I0717 18:06:49.712880  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:06:49.729337  401374 logs.go:123] Gathering logs for etcd [b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92] ...
	I0717 18:06:49.729371  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92"
	I0717 18:06:49.787944  401374 logs.go:123] Gathering logs for kube-scheduler [259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5] ...
	I0717 18:06:49.787979  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5"
	I0717 18:06:49.848075  401374 logs.go:123] Gathering logs for kube-controller-manager [35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1] ...
	I0717 18:06:49.848112  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1"
	I0717 18:06:49.910656  401374 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:06:49.910691  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 18:06:50.044022  401374 logs.go:123] Gathering logs for kube-apiserver [b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68] ...
	I0717 18:06:50.044054  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68"
	I0717 18:06:50.095111  401374 logs.go:123] Gathering logs for coredns [d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26] ...
	I0717 18:06:50.095146  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26"
	I0717 18:06:50.134636  401374 logs.go:123] Gathering logs for kube-proxy [2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3] ...
	I0717 18:06:50.134673  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3"
	I0717 18:06:50.173045  401374 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:06:50.173074  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:06:51.021034  401374 logs.go:123] Gathering logs for container status ...
	I0717 18:06:51.021092  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:06:51.076250  401374 out.go:304] Setting ErrFile to fd 2...
	I0717 18:06:51.076290  401374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 18:06:51.076357  401374 out.go:239] X Problems detected in kubelet:
	W0717 18:06:51.076374  401374 out.go:239]   Jul 17 18:05:17 addons-453453 kubelet[1277]: W0717 18:05:17.370589    1277 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	W0717 18:06:51.076386  401374 out.go:239]   Jul 17 18:05:17 addons-453453 kubelet[1277]: E0717 18:05:17.370689    1277 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	I0717 18:06:51.076397  401374 out.go:304] Setting ErrFile to fd 2...
	I0717 18:06:51.076403  401374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:07:01.077317  401374 api_server.go:253] Checking apiserver healthz at https://192.168.39.136:8443/healthz ...
	I0717 18:07:01.081975  401374 api_server.go:279] https://192.168.39.136:8443/healthz returned 200:
	ok
	I0717 18:07:01.083108  401374 api_server.go:141] control plane version: v1.30.2
	I0717 18:07:01.083131  401374 api_server.go:131] duration metric: took 11.782371865s to wait for apiserver health ...
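(An aside on the step that just completed: the api_server.go entries above poll the control plane's /healthz endpoint until it returns 200. The sketch below shows that polling pattern in Go; the endpoint URL is the one from the log, while the helper name, retry interval, and the skipped TLS verification are illustrative assumptions rather than minikube's actual implementation.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns HTTP 200
// or the deadline passes. TLS verification is skipped here only because the test
// cluster uses a self-signed certificate; real code would trust the cluster CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered 200: the apiserver is up
			}
		}
		time.Sleep(2 * time.Second) // retry interval is an assumption
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	// Endpoint taken from the log above.
	if err := waitForHealthz("https://192.168.39.136:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}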
	I0717 18:07:01.083140  401374 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:07:01.083162  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:07:01.083211  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:07:01.124610  401374 cri.go:89] found id: "b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68"
	I0717 18:07:01.124651  401374 cri.go:89] found id: ""
	I0717 18:07:01.124662  401374 logs.go:276] 1 containers: [b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68]
	I0717 18:07:01.124732  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:07:01.130070  401374 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:07:01.130137  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:07:01.176369  401374 cri.go:89] found id: "b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92"
	I0717 18:07:01.176401  401374 cri.go:89] found id: ""
	I0717 18:07:01.176410  401374 logs.go:276] 1 containers: [b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92]
	I0717 18:07:01.176473  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:07:01.181519  401374 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:07:01.181598  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:07:01.221814  401374 cri.go:89] found id: "d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26"
	I0717 18:07:01.221842  401374 cri.go:89] found id: ""
	I0717 18:07:01.221852  401374 logs.go:276] 1 containers: [d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26]
	I0717 18:07:01.221921  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:07:01.226065  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:07:01.226129  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:07:01.265271  401374 cri.go:89] found id: "259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5"
	I0717 18:07:01.265296  401374 cri.go:89] found id: ""
	I0717 18:07:01.265307  401374 logs.go:276] 1 containers: [259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5]
	I0717 18:07:01.265366  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:07:01.269699  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:07:01.269762  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:07:01.307878  401374 cri.go:89] found id: "2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3"
	I0717 18:07:01.307914  401374 cri.go:89] found id: ""
	I0717 18:07:01.307924  401374 logs.go:276] 1 containers: [2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3]
	I0717 18:07:01.307994  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:07:01.312097  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:07:01.312159  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:07:01.351048  401374 cri.go:89] found id: "35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1"
	I0717 18:07:01.351079  401374 cri.go:89] found id: ""
	I0717 18:07:01.351091  401374 logs.go:276] 1 containers: [35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1]
	I0717 18:07:01.351154  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:07:01.355191  401374 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:07:01.355271  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:07:01.404615  401374 cri.go:89] found id: ""
	I0717 18:07:01.404648  401374 logs.go:276] 0 containers: []
	W0717 18:07:01.404657  401374 logs.go:278] No container was found matching "kindnet"
	I0717 18:07:01.404667  401374 logs.go:123] Gathering logs for etcd [b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92] ...
	I0717 18:07:01.404683  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92"
	I0717 18:07:01.460531  401374 logs.go:123] Gathering logs for kube-scheduler [259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5] ...
	I0717 18:07:01.460568  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5"
	I0717 18:07:01.508356  401374 logs.go:123] Gathering logs for kube-proxy [2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3] ...
	I0717 18:07:01.508400  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3"
	I0717 18:07:01.550988  401374 logs.go:123] Gathering logs for kube-controller-manager [35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1] ...
	I0717 18:07:01.551021  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1"
	I0717 18:07:01.612085  401374 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:07:01.612130  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:07:02.595712  401374 logs.go:123] Gathering logs for container status ...
	I0717 18:07:02.595776  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:07:02.650570  401374 logs.go:123] Gathering logs for kubelet ...
	I0717 18:07:02.650624  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 18:07:02.703982  401374 logs.go:138] Found kubelet problem: Jul 17 18:05:17 addons-453453 kubelet[1277]: W0717 18:05:17.370589    1277 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	W0717 18:07:02.704161  401374 logs.go:138] Found kubelet problem: Jul 17 18:05:17 addons-453453 kubelet[1277]: E0717 18:05:17.370689    1277 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	I0717 18:07:02.731032  401374 logs.go:123] Gathering logs for dmesg ...
	I0717 18:07:02.731076  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:07:02.746793  401374 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:07:02.746831  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 18:07:02.882527  401374 logs.go:123] Gathering logs for kube-apiserver [b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68] ...
	I0717 18:07:02.882584  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68"
	I0717 18:07:02.940182  401374 logs.go:123] Gathering logs for coredns [d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26] ...
	I0717 18:07:02.940235  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26"
	I0717 18:07:02.979974  401374 out.go:304] Setting ErrFile to fd 2...
	I0717 18:07:02.980010  401374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 18:07:02.980077  401374 out.go:239] X Problems detected in kubelet:
	W0717 18:07:02.980089  401374 out.go:239]   Jul 17 18:05:17 addons-453453 kubelet[1277]: W0717 18:05:17.370589    1277 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	W0717 18:07:02.980099  401374 out.go:239]   Jul 17 18:05:17 addons-453453 kubelet[1277]: E0717 18:05:17.370689    1277 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	I0717 18:07:02.980109  401374 out.go:304] Setting ErrFile to fd 2...
	I0717 18:07:02.980115  401374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:07:12.990067  401374 system_pods.go:59] 18 kube-system pods found
	I0717 18:07:12.990100  401374 system_pods.go:61] "coredns-7db6d8ff4d-wpzc7" [31ed1339-07ca-4d41-a32f-3a2b203555e1] Running
	I0717 18:07:12.990105  401374 system_pods.go:61] "csi-hostpath-attacher-0" [97417d9d-ca84-4bf4-abc2-41be2734c7ac] Running
	I0717 18:07:12.990108  401374 system_pods.go:61] "csi-hostpath-resizer-0" [943ebd63-ae02-4da1-9fec-0714f480d246] Running
	I0717 18:07:12.990112  401374 system_pods.go:61] "csi-hostpathplugin-fbs7w" [95d2c04d-7e7a-42eb-950b-e156bb27b489] Running
	I0717 18:07:12.990115  401374 system_pods.go:61] "etcd-addons-453453" [d6799cd0-a3dd-4395-9423-eff734cbe921] Running
	I0717 18:07:12.990118  401374 system_pods.go:61] "kube-apiserver-addons-453453" [912afe9d-e769-41b0-80a2-f4a3e649311a] Running
	I0717 18:07:12.990122  401374 system_pods.go:61] "kube-controller-manager-addons-453453" [0083af6a-7f5f-439c-9dde-13f8e0bf3476] Running
	I0717 18:07:12.990125  401374 system_pods.go:61] "kube-ingress-dns-minikube" [62d0dcb4-1d9b-4177-b580-84291702a582] Running
	I0717 18:07:12.990128  401374 system_pods.go:61] "kube-proxy-45g92" [287c805c-5dbe-4f01-8153-dcf0424c2edc] Running
	I0717 18:07:12.990130  401374 system_pods.go:61] "kube-scheduler-addons-453453" [9177b89e-81eb-4f2d-a7ed-46e7a240e284] Running
	I0717 18:07:12.990135  401374 system_pods.go:61] "metrics-server-c59844bb4-5m4fv" [886d3903-d44e-489c-bf8d-be11494d150b] Running
	I0717 18:07:12.990138  401374 system_pods.go:61] "nvidia-device-plugin-daemonset-h5kz7" [b8017821-48d3-427f-87a1-64e210b8ca26] Running
	I0717 18:07:12.990141  401374 system_pods.go:61] "registry-656c9c8d9c-mdcds" [2aea3a0e-bf77-437f-ada1-99cf0afc991d] Running
	I0717 18:07:12.990144  401374 system_pods.go:61] "registry-proxy-bvkbp" [ee546b39-8d72-4a83-b1f0-5d08d5ba2998] Running
	I0717 18:07:12.990146  401374 system_pods.go:61] "snapshot-controller-745499f584-dpztl" [bcd0cc4c-df7f-4853-8d86-34efa6b7ee6b] Running
	I0717 18:07:12.990150  401374 system_pods.go:61] "snapshot-controller-745499f584-n7cpl" [e8c47f09-2db9-4af5-969d-38ebec140574] Running
	I0717 18:07:12.990153  401374 system_pods.go:61] "storage-provisioner" [eb1c997d-8a91-402e-aabd-c19ce8771f6e] Running
	I0717 18:07:12.990155  401374 system_pods.go:61] "tiller-deploy-6677d64bcd-g4wtr" [05df6af2-4add-4e71-b8e0-eb055c2f28cc] Running
	I0717 18:07:12.990162  401374 system_pods.go:74] duration metric: took 11.907016272s to wait for pod list to return data ...
	I0717 18:07:12.990175  401374 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:07:12.992762  401374 default_sa.go:45] found service account: "default"
	I0717 18:07:12.992783  401374 default_sa.go:55] duration metric: took 2.602582ms for default service account to be created ...
	I0717 18:07:12.992789  401374 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:07:13.001781  401374 system_pods.go:86] 18 kube-system pods found
	I0717 18:07:13.001809  401374 system_pods.go:89] "coredns-7db6d8ff4d-wpzc7" [31ed1339-07ca-4d41-a32f-3a2b203555e1] Running
	I0717 18:07:13.001814  401374 system_pods.go:89] "csi-hostpath-attacher-0" [97417d9d-ca84-4bf4-abc2-41be2734c7ac] Running
	I0717 18:07:13.001819  401374 system_pods.go:89] "csi-hostpath-resizer-0" [943ebd63-ae02-4da1-9fec-0714f480d246] Running
	I0717 18:07:13.001823  401374 system_pods.go:89] "csi-hostpathplugin-fbs7w" [95d2c04d-7e7a-42eb-950b-e156bb27b489] Running
	I0717 18:07:13.001827  401374 system_pods.go:89] "etcd-addons-453453" [d6799cd0-a3dd-4395-9423-eff734cbe921] Running
	I0717 18:07:13.001831  401374 system_pods.go:89] "kube-apiserver-addons-453453" [912afe9d-e769-41b0-80a2-f4a3e649311a] Running
	I0717 18:07:13.001836  401374 system_pods.go:89] "kube-controller-manager-addons-453453" [0083af6a-7f5f-439c-9dde-13f8e0bf3476] Running
	I0717 18:07:13.001841  401374 system_pods.go:89] "kube-ingress-dns-minikube" [62d0dcb4-1d9b-4177-b580-84291702a582] Running
	I0717 18:07:13.001845  401374 system_pods.go:89] "kube-proxy-45g92" [287c805c-5dbe-4f01-8153-dcf0424c2edc] Running
	I0717 18:07:13.001850  401374 system_pods.go:89] "kube-scheduler-addons-453453" [9177b89e-81eb-4f2d-a7ed-46e7a240e284] Running
	I0717 18:07:13.001857  401374 system_pods.go:89] "metrics-server-c59844bb4-5m4fv" [886d3903-d44e-489c-bf8d-be11494d150b] Running
	I0717 18:07:13.001861  401374 system_pods.go:89] "nvidia-device-plugin-daemonset-h5kz7" [b8017821-48d3-427f-87a1-64e210b8ca26] Running
	I0717 18:07:13.001865  401374 system_pods.go:89] "registry-656c9c8d9c-mdcds" [2aea3a0e-bf77-437f-ada1-99cf0afc991d] Running
	I0717 18:07:13.001870  401374 system_pods.go:89] "registry-proxy-bvkbp" [ee546b39-8d72-4a83-b1f0-5d08d5ba2998] Running
	I0717 18:07:13.001874  401374 system_pods.go:89] "snapshot-controller-745499f584-dpztl" [bcd0cc4c-df7f-4853-8d86-34efa6b7ee6b] Running
	I0717 18:07:13.001880  401374 system_pods.go:89] "snapshot-controller-745499f584-n7cpl" [e8c47f09-2db9-4af5-969d-38ebec140574] Running
	I0717 18:07:13.001883  401374 system_pods.go:89] "storage-provisioner" [eb1c997d-8a91-402e-aabd-c19ce8771f6e] Running
	I0717 18:07:13.001887  401374 system_pods.go:89] "tiller-deploy-6677d64bcd-g4wtr" [05df6af2-4add-4e71-b8e0-eb055c2f28cc] Running
	I0717 18:07:13.001893  401374 system_pods.go:126] duration metric: took 9.098881ms to wait for k8s-apps to be running ...
	I0717 18:07:13.001906  401374 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:07:13.001956  401374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:07:13.019010  401374 system_svc.go:56] duration metric: took 17.095697ms WaitForService to wait for kubelet
	I0717 18:07:13.019047  401374 kubeadm.go:582] duration metric: took 2m2.268722577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:07:13.019077  401374 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:07:13.022619  401374 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:07:13.022645  401374 node_conditions.go:123] node cpu capacity is 2
	I0717 18:07:13.022658  401374 node_conditions.go:105] duration metric: took 3.575525ms to run NodePressure ...
	I0717 18:07:13.022671  401374 start.go:241] waiting for startup goroutines ...
	I0717 18:07:13.022680  401374 start.go:246] waiting for cluster config update ...
	I0717 18:07:13.022702  401374 start.go:255] writing updated cluster config ...
	I0717 18:07:13.023036  401374 ssh_runner.go:195] Run: rm -f paused
	I0717 18:07:13.076888  401374 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:07:13.079339  401374 out.go:177] * Done! kubectl is now configured to use "addons-453453" cluster and "default" namespace by default
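(The "Gathering logs for ..." steps above, and the CRI-O dump that follows, are both driven by crictl on the node: first crictl ps -a --quiet --name=<component> to resolve container IDs, then crictl logs --tail 400 <id> for each one. A rough local sketch of that two-step pattern is below; it runs crictl directly rather than over SSH as the ssh_runner lines do, and the function names are hypothetical.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponentLogs lists CRI containers whose name matches component and prints
// the last n lines of each one's logs, mirroring the two crictl invocations seen
// in the log above. It needs crictl on PATH and sudo rights on the node.
func tailComponentLogs(component string, n int) error {
	// Step 1: --quiet prints only the matching container IDs, one per line.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return fmt.Errorf("listing %s containers: %w", component, err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("no container was found matching %q\n", component)
		return nil
	}
	// Step 2: tail each container's log through the runtime.
	for _, id := range ids {
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("logs for %s: %w", id, err)
		}
		fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		if err := tailComponentLogs(c, 400); err != nil {
			fmt.Println(err)
		}
	}
}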
	
	
	==> CRI-O <==
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.727624617Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6928acc0-3914-4389-ba76-84b16609cc5a name=/runtime.v1.RuntimeService/Version
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.728852757Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df4d017f-7f35-41f6-8df5-0ac23047d07a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.730093009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721239819730062139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df4d017f-7f35-41f6-8df5-0ac23047d07a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.730870885Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbf33330-8d92-4eff-a5fe-f1238259eb42 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.731068111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbf33330-8d92-4eff-a5fe-f1238259eb42 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.731473010Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2918bb8f28eb43f891719f323c69145102d464c1c37fdaf9a33bae22afe1d1d0,PodSandboxId:33eeb5aa7d898ca3506d24a719c6b5bf2dab23a16b578b1a86d1c77127e8995d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721239812819686025,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-6bfmd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b273295-d4f1-43aa-b0ef-d148763f6593,},Annotations:map[string]string{io.kubernetes.container.hash: cc4c9615,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d503fb774476ce51dec196a5540f7f1a895198a9458d0ac60141eb335ebfbf0,PodSandboxId:1582aebb9ca26d07e9d5bee806549d6b91f144053e0fdb99ac6b8cd49eea4c23,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721239671387457770,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6918b754-82dd-4b43-acdd-204f3a8419d3,},Annotations:map[string]string{io.kubernet
es.container.hash: fd6b8330,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c1926654c9521721b2311606c653c2e711eaaa8cf42a672c919ad0693abd00,PodSandboxId:f18424687dfa0862df3c461ff4981f78c54951632533e881bee7b0c54528f36c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721239640261678791,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-29grz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: b89e8f1b-24a4-46f3-b300-72f6c803f7d6,},Annotations:map[string]string{io.kubernetes.container.hash: 97be45f8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab3688fd15de45b760f574e9673fa61a7686ac369815e917070b3418d588be8,PodSandboxId:63ff63a9bff7c4100e37fbbba69011f462ee746505f9d36bd2c197cc815f02f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721239597302212555,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-7d9fn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f4619892-e5ff-45a2-b2d8-001fba539eb6,},Annotations:map[string]string{io.kubernetes.container.hash: cad87a,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afe10994e3ef10113722afb027166ae7c7fd120e44220f3baf1465d3ad46cfa7,PodSandboxId:7cea6ed4112c93b7723c12f5dd7d5465c7f3a7d39c64777aab8e6d4dacd8bc86,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721239579724069259,Labels:map[string]st
ring{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-r6sqz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b79200b8-349b-4aaa-b7fd-ec6030c13900,},Annotations:map[string]string{io.kubernetes.container.hash: fb66ab1c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f3d361df604d29a762cdbf9eaddd32323ae5e12b4251aec829f29894647d049,PodSandboxId:bddbb022ec5b0ec1ea347b9bee1c3247d1b0612164436e7038b21a9e9acc0c90,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721
239578775746288,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-97fxf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 05308e26-c541-463f-b368-552cc3c07fa1,},Annotations:map[string]string{io.kubernetes.container.hash: 817e5523,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd1d69251f6e026b4cf79c0521e11f64b4609e4146b065cc5ee67d8dcccf748,PodSandboxId:63d16d4983e01940b1c9bf89a1c488f3c2f91108d6f2e60c03c12fd13bb4c25b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,Crea
tedAt:1721239576440596851,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rzt74,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d4ba3b29-c2ab-4ed4-894d-9fcca9d6eaca,},Annotations:map[string]string{io.kubernetes.container.hash: 42a4325,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:146717e5268df0db8d0ce177bc7074a55e0db1207cef215c28d8f43de6ae334c,PodSandboxId:a668b50ae04ad5c7a9958f97f583af7bc92134e6341a4dc4de1f27b2c5b082a3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721239530674726002,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5m4fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886d3903-d44e-489c-bf8d-be11494d150b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f57834,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfb23522a04a0b30f4683eb4e6f062603e4e822ef53c669efc17930b868dc18,PodSandboxId:3e48d5f320a76dde15ae3ab63d1aab2ff919abba7de3033c4aea635948167ada,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db
3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721239516829292897,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c997d-8a91-402e-aabd-c19ce8771f6e,},Annotations:map[string]string{io.kubernetes.container.hash: 84ed994d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26,PodSandboxId:f614224ac1b46f7af8481679f35918eeac2fb4ef89cbf76d9f8d1812de938c2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e
48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721239514989989573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpzc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ed1339-07ca-4d41-a32f-3a2b203555e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6569530c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d0094
2f3,PodSandboxId:aeff920decc5bc2cb937abb066b6256fcfb03b046111322cc884fc6c5a0a9fe1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721239512300888305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45g92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c805c-5dbe-4f01-8153-dcf0424c2edc,},Annotations:map[string]string{io.kubernetes.container.hash: 28b1b38d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5,PodSandboxId:3d2313a576fee8fb
017003454d315bfc1d51b4f459a62148be66f22872180bc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721239492232507012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afda81ae2740d330017a46f45930e6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92,PodSandboxId:921f3b320d6a0e8254997b2b1e50e6e332
583a9cfe2570940d4089f2113fd3aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721239492219429708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c2e7f10676afa5f4ef1ebec7d4216c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfb65b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68,PodSandboxId:25fc7e6805bbe84d9443801dda9edc9b3bf49d2ff0f49271e5249f0d61a57b87,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721239492203326163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8e8a77892a04a0ceea7caff40574ef,},Annotations:map[string]string{io.kubernetes.container.hash: 427c8812,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1,PodSandboxId:f5dc0e184131d22f823a74eced70a8fe39b415b24913db129a8259c3d03e707a,Metadata:&ContainerMetadata{Name:
kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721239492071345398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e877c929903859d77ada01f09fc28ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbf33330-8d92-4eff-a5fe-f1238259eb42 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.767249966Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e0408b7-0d4d-4a4c-94b8-e6712e2e2c62 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.767347557Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e0408b7-0d4d-4a4c-94b8-e6712e2e2c62 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.769001563Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec72aaa0-d6e2-4207-b0dc-a12a76bfe783 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.770418470Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721239819770388809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec72aaa0-d6e2-4207-b0dc-a12a76bfe783 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.770976187Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb9e8651-9d05-4f77-ae93-e33acc15d51c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.771034325Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb9e8651-9d05-4f77-ae93-e33acc15d51c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.771414876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2918bb8f28eb43f891719f323c69145102d464c1c37fdaf9a33bae22afe1d1d0,PodSandboxId:33eeb5aa7d898ca3506d24a719c6b5bf2dab23a16b578b1a86d1c77127e8995d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721239812819686025,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-6bfmd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b273295-d4f1-43aa-b0ef-d148763f6593,},Annotations:map[string]string{io.kubernetes.container.hash: cc4c9615,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d503fb774476ce51dec196a5540f7f1a895198a9458d0ac60141eb335ebfbf0,PodSandboxId:1582aebb9ca26d07e9d5bee806549d6b91f144053e0fdb99ac6b8cd49eea4c23,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721239671387457770,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6918b754-82dd-4b43-acdd-204f3a8419d3,},Annotations:map[string]string{io.kubernet
es.container.hash: fd6b8330,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c1926654c9521721b2311606c653c2e711eaaa8cf42a672c919ad0693abd00,PodSandboxId:f18424687dfa0862df3c461ff4981f78c54951632533e881bee7b0c54528f36c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721239640261678791,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-29grz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: b89e8f1b-24a4-46f3-b300-72f6c803f7d6,},Annotations:map[string]string{io.kubernetes.container.hash: 97be45f8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab3688fd15de45b760f574e9673fa61a7686ac369815e917070b3418d588be8,PodSandboxId:63ff63a9bff7c4100e37fbbba69011f462ee746505f9d36bd2c197cc815f02f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721239597302212555,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-7d9fn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f4619892-e5ff-45a2-b2d8-001fba539eb6,},Annotations:map[string]string{io.kubernetes.container.hash: cad87a,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afe10994e3ef10113722afb027166ae7c7fd120e44220f3baf1465d3ad46cfa7,PodSandboxId:7cea6ed4112c93b7723c12f5dd7d5465c7f3a7d39c64777aab8e6d4dacd8bc86,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721239579724069259,Labels:map[string]st
ring{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-r6sqz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b79200b8-349b-4aaa-b7fd-ec6030c13900,},Annotations:map[string]string{io.kubernetes.container.hash: fb66ab1c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f3d361df604d29a762cdbf9eaddd32323ae5e12b4251aec829f29894647d049,PodSandboxId:bddbb022ec5b0ec1ea347b9bee1c3247d1b0612164436e7038b21a9e9acc0c90,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721
239578775746288,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-97fxf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 05308e26-c541-463f-b368-552cc3c07fa1,},Annotations:map[string]string{io.kubernetes.container.hash: 817e5523,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd1d69251f6e026b4cf79c0521e11f64b4609e4146b065cc5ee67d8dcccf748,PodSandboxId:63d16d4983e01940b1c9bf89a1c488f3c2f91108d6f2e60c03c12fd13bb4c25b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,Crea
tedAt:1721239576440596851,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rzt74,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d4ba3b29-c2ab-4ed4-894d-9fcca9d6eaca,},Annotations:map[string]string{io.kubernetes.container.hash: 42a4325,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:146717e5268df0db8d0ce177bc7074a55e0db1207cef215c28d8f43de6ae334c,PodSandboxId:a668b50ae04ad5c7a9958f97f583af7bc92134e6341a4dc4de1f27b2c5b082a3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721239530674726002,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5m4fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886d3903-d44e-489c-bf8d-be11494d150b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f57834,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfb23522a04a0b30f4683eb4e6f062603e4e822ef53c669efc17930b868dc18,PodSandboxId:3e48d5f320a76dde15ae3ab63d1aab2ff919abba7de3033c4aea635948167ada,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db
3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721239516829292897,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c997d-8a91-402e-aabd-c19ce8771f6e,},Annotations:map[string]string{io.kubernetes.container.hash: 84ed994d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26,PodSandboxId:f614224ac1b46f7af8481679f35918eeac2fb4ef89cbf76d9f8d1812de938c2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e
48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721239514989989573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpzc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ed1339-07ca-4d41-a32f-3a2b203555e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6569530c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d0094
2f3,PodSandboxId:aeff920decc5bc2cb937abb066b6256fcfb03b046111322cc884fc6c5a0a9fe1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721239512300888305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45g92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c805c-5dbe-4f01-8153-dcf0424c2edc,},Annotations:map[string]string{io.kubernetes.container.hash: 28b1b38d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5,PodSandboxId:3d2313a576fee8fb
017003454d315bfc1d51b4f459a62148be66f22872180bc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721239492232507012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afda81ae2740d330017a46f45930e6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92,PodSandboxId:921f3b320d6a0e8254997b2b1e50e6e332
583a9cfe2570940d4089f2113fd3aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721239492219429708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c2e7f10676afa5f4ef1ebec7d4216c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfb65b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68,PodSandboxId:25fc7e6805bbe84d9443801dda9edc9b3bf49d2ff0f49271e5249f0d61a57b87,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721239492203326163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8e8a77892a04a0ceea7caff40574ef,},Annotations:map[string]string{io.kubernetes.container.hash: 427c8812,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1,PodSandboxId:f5dc0e184131d22f823a74eced70a8fe39b415b24913db129a8259c3d03e707a,Metadata:&ContainerMetadata{Name:
kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721239492071345398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e877c929903859d77ada01f09fc28ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb9e8651-9d05-4f77-ae93-e33acc15d51c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.797853185Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11e1d863-6aec-4737-8698-956af0925d44 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.798209416Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:33eeb5aa7d898ca3506d24a719c6b5bf2dab23a16b578b1a86d1c77127e8995d,Metadata:&PodSandboxMetadata{Name:hello-world-app-6778b5fc9f-6bfmd,Uid:9b273295-d4f1-43aa-b0ef-d148763f6593,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239809991959349,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-6bfmd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b273295-d4f1-43aa-b0ef-d148763f6593,pod-template-hash: 6778b5fc9f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:10:09.682567965Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1582aebb9ca26d07e9d5bee806549d6b91f144053e0fdb99ac6b8cd49eea4c23,Metadata:&PodSandboxMetadata{Name:nginx,Uid:6918b754-82dd-4b43-acdd-204f3a8419d3,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1721239658605596693,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6918b754-82dd-4b43-acdd-204f3a8419d3,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:07:38.287072080Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f18424687dfa0862df3c461ff4981f78c54951632533e881bee7b0c54528f36c,Metadata:&PodSandboxMetadata{Name:headlamp-7867546754-29grz,Uid:b89e8f1b-24a4-46f3-b300-72f6c803f7d6,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239634380239621,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7867546754-29grz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: b89e8f1b-24a4-46f3-b300-72f6c803f7d6,pod-template-hash: 7867546754,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
07-17T18:07:14.038854020Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:63ff63a9bff7c4100e37fbbba69011f462ee746505f9d36bd2c197cc815f02f7,Metadata:&PodSandboxMetadata{Name:gcp-auth-5db96cd9b4-7d9fn,Uid:f4619892-e5ff-45a2-b2d8-001fba539eb6,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239586837415170,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-7d9fn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f4619892-e5ff-45a2-b2d8-001fba539eb6,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 5db96cd9b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:05:22.619984001Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:63d16d4983e01940b1c9bf89a1c488f3c2f91108d6f2e60c03c12fd13bb4c25b,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-799879c74f-rzt74,Uid:d4ba3b29-c2ab-4ed4-894d-9fcca9d6eaca,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,Creat
edAt:1721239519173863580,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rzt74,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d4ba3b29-c2ab-4ed4-894d-9fcca9d6eaca,pod-template-hash: 799879c74f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:05:17.364020086Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a668b50ae04ad5c7a9958f97f583af7bc92134e6341a4dc4de1f27b2c5b082a3,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-5m4fv,Uid:886d3903-d44e-489c-bf8d-be11494d150b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239517277671953,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-5m4fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886d3903-d44e-489c-bf8d-be11494d150b,k8s-app: metr
ics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:05:16.935381396Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3e48d5f320a76dde15ae3ab63d1aab2ff919abba7de3033c4aea635948167ada,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:eb1c997d-8a91-402e-aabd-c19ce8771f6e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239515906346716,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c997d-8a91-402e-aabd-c19ce8771f6e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\
"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-17T18:05:15.562034539Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f614224ac1b46f7af8481679f35918eeac2fb4ef89cbf76d9f8d1812de938c2c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wpzc7,Uid:31ed1339-07ca-4d41-a32f-3a2b203555e1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239511586537703,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpzc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ed1339-07ca-4d41-a32f-3a2b20
3555e1,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:05:11.205032932Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aeff920decc5bc2cb937abb066b6256fcfb03b046111322cc884fc6c5a0a9fe1,Metadata:&PodSandboxMetadata{Name:kube-proxy-45g92,Uid:287c805c-5dbe-4f01-8153-dcf0424c2edc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239511564180928,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-45g92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c805c-5dbe-4f01-8153-dcf0424c2edc,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:05:10.909396775Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:921f3b320d6a0e8254997b2b1e50e6e332583a9cfe2570940d4089f2113fd3aa,Metadata:&PodSandboxMetadata{Name:etcd-addons-453453,Uid:02c2e
7f10676afa5f4ef1ebec7d4216c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239491918222610,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c2e7f10676afa5f4ef1ebec7d4216c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.136:2379,kubernetes.io/config.hash: 02c2e7f10676afa5f4ef1ebec7d4216c,kubernetes.io/config.seen: 2024-07-17T18:04:51.470029619Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3d2313a576fee8fb017003454d315bfc1d51b4f459a62148be66f22872180bc1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-453453,Uid:afda81ae2740d330017a46f45930e6fe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239491914725453,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-
453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afda81ae2740d330017a46f45930e6fe,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: afda81ae2740d330017a46f45930e6fe,kubernetes.io/config.seen: 2024-07-17T18:04:51.470028754Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f5dc0e184131d22f823a74eced70a8fe39b415b24913db129a8259c3d03e707a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-453453,Uid:2e877c929903859d77ada01f09fc28ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239491910581387,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e877c929903859d77ada01f09fc28ad,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2e877c929903859d77ada01f09fc28ad,kubernetes.io/config.seen: 2024-07-17T18:04:51.470027750Z,
kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:25fc7e6805bbe84d9443801dda9edc9b3bf49d2ff0f49271e5249f0d61a57b87,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-453453,Uid:6d8e8a77892a04a0ceea7caff40574ef,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239491910032408,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8e8a77892a04a0ceea7caff40574ef,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.136:8443,kubernetes.io/config.hash: 6d8e8a77892a04a0ceea7caff40574ef,kubernetes.io/config.seen: 2024-07-17T18:04:51.470022829Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=11e1d863-6aec-4737-8698-956af0925d44 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.800126623Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c682be44-fdce-4b09-89b5-4a0b1bb657ba name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.800195050Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c682be44-fdce-4b09-89b5-4a0b1bb657ba name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.800702959Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2918bb8f28eb43f891719f323c69145102d464c1c37fdaf9a33bae22afe1d1d0,PodSandboxId:33eeb5aa7d898ca3506d24a719c6b5bf2dab23a16b578b1a86d1c77127e8995d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721239812819686025,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-6bfmd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b273295-d4f1-43aa-b0ef-d148763f6593,},Annotations:map[string]string{io.kubernetes.container.hash: cc4c9615,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d503fb774476ce51dec196a5540f7f1a895198a9458d0ac60141eb335ebfbf0,PodSandboxId:1582aebb9ca26d07e9d5bee806549d6b91f144053e0fdb99ac6b8cd49eea4c23,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721239671387457770,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6918b754-82dd-4b43-acdd-204f3a8419d3,},Annotations:map[string]string{io.kubernet
es.container.hash: fd6b8330,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c1926654c9521721b2311606c653c2e711eaaa8cf42a672c919ad0693abd00,PodSandboxId:f18424687dfa0862df3c461ff4981f78c54951632533e881bee7b0c54528f36c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721239640261678791,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-29grz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: b89e8f1b-24a4-46f3-b300-72f6c803f7d6,},Annotations:map[string]string{io.kubernetes.container.hash: 97be45f8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab3688fd15de45b760f574e9673fa61a7686ac369815e917070b3418d588be8,PodSandboxId:63ff63a9bff7c4100e37fbbba69011f462ee746505f9d36bd2c197cc815f02f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721239597302212555,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-7d9fn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f4619892-e5ff-45a2-b2d8-001fba539eb6,},Annotations:map[string]string{io.kubernetes.container.hash: cad87a,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd1d69251f6e026b4cf79c0521e11f64b4609e4146b065cc5ee67d8dcccf748,PodSandboxId:63d16d4983e01940b1c9bf89a1c488f3c2f91108d6f2e60c03c12fd13bb4c25b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:172123957
6440596851,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rzt74,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d4ba3b29-c2ab-4ed4-894d-9fcca9d6eaca,},Annotations:map[string]string{io.kubernetes.container.hash: 42a4325,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:146717e5268df0db8d0ce177bc7074a55e0db1207cef215c28d8f43de6ae334c,PodSandboxId:a668b50ae04ad5c7a9958f97f583af7bc92134e6341a4dc4de1f27b2c5b082a3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721239530674726002,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5m4fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886d3903-d44e-489c-bf8d-be11494d150b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f57834,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfb23522a04a0b30f4683eb4e6f062603e4e822ef53c669efc17930b868dc18,PodSandboxId:3e48d5f320a76dde15ae3ab63d1aab2ff919abba7de3033c4aea635948167ada,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c88
72c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721239516829292897,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c997d-8a91-402e-aabd-c19ce8771f6e,},Annotations:map[string]string{io.kubernetes.container.hash: 84ed994d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26,PodSandboxId:f614224ac1b46f7af8481679f35918eeac2fb4ef89cbf76d9f8d1812de938c2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed
5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721239514989989573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpzc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ed1339-07ca-4d41-a32f-3a2b203555e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6569530c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3,PodSandboxI
d:aeff920decc5bc2cb937abb066b6256fcfb03b046111322cc884fc6c5a0a9fe1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721239512300888305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45g92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c805c-5dbe-4f01-8153-dcf0424c2edc,},Annotations:map[string]string{io.kubernetes.container.hash: 28b1b38d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5,PodSandboxId:3d2313a576fee8fb017003454d315bf
c1d51b4f459a62148be66f22872180bc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721239492232507012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afda81ae2740d330017a46f45930e6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92,PodSandboxId:921f3b320d6a0e8254997b2b1e50e6e332583a9cfe2570940
d4089f2113fd3aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721239492219429708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c2e7f10676afa5f4ef1ebec7d4216c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfb65b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68,PodSandboxId:25fc7e6805bbe84d9443801dda9edc9b3bf49d2ff0f49271e5249f0d61a57b87,Metadata:&ContainerMetadata{Nam
e:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721239492203326163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8e8a77892a04a0ceea7caff40574ef,},Annotations:map[string]string{io.kubernetes.container.hash: 427c8812,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1,PodSandboxId:f5dc0e184131d22f823a74eced70a8fe39b415b24913db129a8259c3d03e707a,Metadata:&ContainerMetadata{Name:kube-controller
-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721239492071345398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e877c929903859d77ada01f09fc28ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c682be44-fdce-4b09-89b5-4a0b1bb657ba name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.814281714Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac23e931-f2cf-4e60-8421-bc441efe89e3 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.814578099Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac23e931-f2cf-4e60-8421-bc441efe89e3 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.816014448Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d540550-e805-46ea-833a-6378516d1736 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.817313285Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721239819817285676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d540550-e805-46ea-833a-6378516d1736 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.818397934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fef2176c-8295-4a80-bcff-5af0be0bf941 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.818473603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fef2176c-8295-4a80-bcff-5af0be0bf941 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:10:19 addons-453453 crio[679]: time="2024-07-17 18:10:19.819048930Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2918bb8f28eb43f891719f323c69145102d464c1c37fdaf9a33bae22afe1d1d0,PodSandboxId:33eeb5aa7d898ca3506d24a719c6b5bf2dab23a16b578b1a86d1c77127e8995d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721239812819686025,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-6bfmd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b273295-d4f1-43aa-b0ef-d148763f6593,},Annotations:map[string]string{io.kubernetes.container.hash: cc4c9615,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d503fb774476ce51dec196a5540f7f1a895198a9458d0ac60141eb335ebfbf0,PodSandboxId:1582aebb9ca26d07e9d5bee806549d6b91f144053e0fdb99ac6b8cd49eea4c23,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721239671387457770,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6918b754-82dd-4b43-acdd-204f3a8419d3,},Annotations:map[string]string{io.kubernet
es.container.hash: fd6b8330,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c1926654c9521721b2311606c653c2e711eaaa8cf42a672c919ad0693abd00,PodSandboxId:f18424687dfa0862df3c461ff4981f78c54951632533e881bee7b0c54528f36c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721239640261678791,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-29grz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: b89e8f1b-24a4-46f3-b300-72f6c803f7d6,},Annotations:map[string]string{io.kubernetes.container.hash: 97be45f8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab3688fd15de45b760f574e9673fa61a7686ac369815e917070b3418d588be8,PodSandboxId:63ff63a9bff7c4100e37fbbba69011f462ee746505f9d36bd2c197cc815f02f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721239597302212555,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-7d9fn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f4619892-e5ff-45a2-b2d8-001fba539eb6,},Annotations:map[string]string{io.kubernetes.container.hash: cad87a,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afe10994e3ef10113722afb027166ae7c7fd120e44220f3baf1465d3ad46cfa7,PodSandboxId:7cea6ed4112c93b7723c12f5dd7d5465c7f3a7d39c64777aab8e6d4dacd8bc86,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721239579724069259,Labels:map[string]st
ring{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-r6sqz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b79200b8-349b-4aaa-b7fd-ec6030c13900,},Annotations:map[string]string{io.kubernetes.container.hash: fb66ab1c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f3d361df604d29a762cdbf9eaddd32323ae5e12b4251aec829f29894647d049,PodSandboxId:bddbb022ec5b0ec1ea347b9bee1c3247d1b0612164436e7038b21a9e9acc0c90,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721
239578775746288,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-97fxf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 05308e26-c541-463f-b368-552cc3c07fa1,},Annotations:map[string]string{io.kubernetes.container.hash: 817e5523,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd1d69251f6e026b4cf79c0521e11f64b4609e4146b065cc5ee67d8dcccf748,PodSandboxId:63d16d4983e01940b1c9bf89a1c488f3c2f91108d6f2e60c03c12fd13bb4c25b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,Crea
tedAt:1721239576440596851,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rzt74,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d4ba3b29-c2ab-4ed4-894d-9fcca9d6eaca,},Annotations:map[string]string{io.kubernetes.container.hash: 42a4325,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:146717e5268df0db8d0ce177bc7074a55e0db1207cef215c28d8f43de6ae334c,PodSandboxId:a668b50ae04ad5c7a9958f97f583af7bc92134e6341a4dc4de1f27b2c5b082a3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721239530674726002,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5m4fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886d3903-d44e-489c-bf8d-be11494d150b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f57834,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfb23522a04a0b30f4683eb4e6f062603e4e822ef53c669efc17930b868dc18,PodSandboxId:3e48d5f320a76dde15ae3ab63d1aab2ff919abba7de3033c4aea635948167ada,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db
3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721239516829292897,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c997d-8a91-402e-aabd-c19ce8771f6e,},Annotations:map[string]string{io.kubernetes.container.hash: 84ed994d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26,PodSandboxId:f614224ac1b46f7af8481679f35918eeac2fb4ef89cbf76d9f8d1812de938c2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e
48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721239514989989573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpzc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ed1339-07ca-4d41-a32f-3a2b203555e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6569530c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d0094
2f3,PodSandboxId:aeff920decc5bc2cb937abb066b6256fcfb03b046111322cc884fc6c5a0a9fe1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721239512300888305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45g92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c805c-5dbe-4f01-8153-dcf0424c2edc,},Annotations:map[string]string{io.kubernetes.container.hash: 28b1b38d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5,PodSandboxId:3d2313a576fee8fb
017003454d315bfc1d51b4f459a62148be66f22872180bc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721239492232507012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afda81ae2740d330017a46f45930e6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92,PodSandboxId:921f3b320d6a0e8254997b2b1e50e6e332
583a9cfe2570940d4089f2113fd3aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721239492219429708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c2e7f10676afa5f4ef1ebec7d4216c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfb65b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68,PodSandboxId:25fc7e6805bbe84d9443801dda9edc9b3bf49d2ff0f49271e5249f0d61a57b87,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721239492203326163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8e8a77892a04a0ceea7caff40574ef,},Annotations:map[string]string{io.kubernetes.container.hash: 427c8812,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1,PodSandboxId:f5dc0e184131d22f823a74eced70a8fe39b415b24913db129a8259c3d03e707a,Metadata:&ContainerMetadata{Name:
kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721239492071345398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e877c929903859d77ada01f09fc28ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fef2176c-8295-4a80-bcff-5af0be0bf941 name=/runtime.v1.RuntimeService/ListContainers
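
For orientation, the /runtime.v1.RuntimeService/ListContainers traffic logged above is ordinary CRI gRPC against the CRI-O socket. Below is a minimal Go sketch of an equivalent call; it is not part of the minikube test suite, the socket path is the one named in the node's cri-socket annotation further down, and the k8s.io/cri-api and grpc module versions are assumptions:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the CRI-O socket (same endpoint the kubelet uses on this node).
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Same filter as the logged request: only CONTAINER_RUNNING containers.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
            Filter: &runtimeapi.ContainerFilter{
                State: &runtimeapi.ContainerStateValue{
                    State: runtimeapi.ContainerState_CONTAINER_RUNNING,
                },
            },
        })
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
        }
    }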
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2918bb8f28eb4       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   33eeb5aa7d898       hello-world-app-6778b5fc9f-6bfmd
	3d503fb774476       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                              2 minutes ago       Running             nginx                     0                   1582aebb9ca26       nginx
	07c1926654c95       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        2 minutes ago       Running             headlamp                  0                   f18424687dfa0       headlamp-7867546754-29grz
	5ab3688fd15de       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   63ff63a9bff7c       gcp-auth-5db96cd9b4-7d9fn
	afe10994e3ef1       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             4 minutes ago       Exited              patch                     1                   7cea6ed4112c9       ingress-nginx-admission-patch-r6sqz
	6f3d361df604d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   bddbb022ec5b0       ingress-nginx-admission-create-97fxf
	1dd1d69251f6e       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                              4 minutes ago       Running             yakd                      0                   63d16d4983e01       yakd-dashboard-799879c74f-rzt74
	146717e5268df       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   a668b50ae04ad       metrics-server-c59844bb4-5m4fv
	3bfb23522a04a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   3e48d5f320a76       storage-provisioner
	d45bcf1eb6bad       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   f614224ac1b46       coredns-7db6d8ff4d-wpzc7
	2fb69b3eff0c8       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                             5 minutes ago       Running             kube-proxy                0                   aeff920decc5b       kube-proxy-45g92
	259069889e9e8       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                             5 minutes ago       Running             kube-scheduler            0                   3d2313a576fee       kube-scheduler-addons-453453
	b698fb331680e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago       Running             etcd                      0                   921f3b320d6a0       etcd-addons-453453
	b0a42f1bfe6fa       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                             5 minutes ago       Running             kube-apiserver            0                   25fc7e6805bbe       kube-apiserver-addons-453453
	35a820bcebd02       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                             5 minutes ago       Running             kube-controller-manager   0                   f5dc0e184131d       kube-controller-manager-addons-453453
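
The CREATED column in the table above is simply the CreatedAt nanosecond timestamps from the ListContainers responses rendered as relative ages. A small Go sketch, using the hello-world-app value copied verbatim from the log and the 18:10:19 capture time of the surrounding crio log lines as the reference:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // CreatedAt for the hello-world-app container, taken verbatim from the
        // ListContainers response above (Unix nanoseconds).
        created := time.Unix(0, 1721239812819686025).UTC()
        logTime := time.Date(2024, 7, 17, 18, 10, 19, 0, time.UTC)

        fmt.Println(created)              // 2024-07-17 18:10:12.819686025 +0000 UTC
        fmt.Println(logTime.Sub(created)) // ~6.2s at 18:10:19; the table, captured a moment later, shows "7 seconds ago"
    }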
	
	
	==> coredns [d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26] <==
	[INFO] 10.244.0.7:40475 - 33247 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000125668s
	[INFO] 10.244.0.7:52645 - 5955 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000080304s
	[INFO] 10.244.0.7:52645 - 16449 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000160659s
	[INFO] 10.244.0.7:50139 - 52436 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067486s
	[INFO] 10.244.0.7:50139 - 5077 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050159s
	[INFO] 10.244.0.7:55040 - 80 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00007855s
	[INFO] 10.244.0.7:55040 - 59734 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000048133s
	[INFO] 10.244.0.7:39990 - 42144 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000184508s
	[INFO] 10.244.0.7:39990 - 24482 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000097731s
	[INFO] 10.244.0.7:45949 - 7974 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00011024s
	[INFO] 10.244.0.7:45949 - 44068 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.001625718s
	[INFO] 10.244.0.7:34057 - 24261 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000077898s
	[INFO] 10.244.0.7:34057 - 55482 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000031736s
	[INFO] 10.244.0.7:46820 - 64246 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000066173s
	[INFO] 10.244.0.7:46820 - 53744 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00009971s
	[INFO] 10.244.0.22:60535 - 52797 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000347098s
	[INFO] 10.244.0.22:55677 - 45067 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000090447s
	[INFO] 10.244.0.22:49722 - 20404 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000082103s
	[INFO] 10.244.0.22:34998 - 11278 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000056523s
	[INFO] 10.244.0.22:59034 - 50331 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000061046s
	[INFO] 10.244.0.22:58850 - 18232 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000058025s
	[INFO] 10.244.0.22:50257 - 13348 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000437738s
	[INFO] 10.244.0.22:36407 - 56893 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000805971s
	[INFO] 10.244.0.25:51907 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000531651s
	[INFO] 10.244.0.25:56952 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000146342s
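
The repeated NXDOMAIN/NOERROR pairs above are the resolver's ndots search-path expansion: the in-cluster name is first tried with each search suffix appended, and only the exact service name answers. A minimal Go illustration; the search list is the usual kubelet default for a kube-system pod, which is an assumption here since the pod's resolv.conf is not printed in this log:

    package main

    import "fmt"

    func main() {
        // Assumed kubelet-provided search path for a pod in kube-system
        // (ndots:5 makes the resolver try these suffixes before the bare name).
        search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
        name := "registry.kube-system.svc.cluster.local"

        for _, s := range search {
            fmt.Println(name + "." + s) // -> the NXDOMAIN queries in the log above
        }
        fmt.Println(name) // exact service name -> NOERROR (A / AAAA answers)
    }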
	
	
	==> describe nodes <==
	Name:               addons-453453
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-453453
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=addons-453453
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T18_04_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-453453
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:04:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-453453
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:10:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:08:01 +0000   Wed, 17 Jul 2024 18:04:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:08:01 +0000   Wed, 17 Jul 2024 18:04:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:08:01 +0000   Wed, 17 Jul 2024 18:04:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:08:01 +0000   Wed, 17 Jul 2024 18:04:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.136
	  Hostname:    addons-453453
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb6dc61fd889454e95ace36bd2204ff5
	  System UUID:                eb6dc61f-d889-454e-95ac-e36bd2204ff5
	  Boot ID:                    e850f4c2-d1e4-4c24-8b9f-0a02de591062
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-6bfmd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  gcp-auth                    gcp-auth-5db96cd9b4-7d9fn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  headlamp                    headlamp-7867546754-29grz                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 coredns-7db6d8ff4d-wpzc7                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m9s
	  kube-system                 etcd-addons-453453                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m23s
	  kube-system                 kube-apiserver-addons-453453             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-addons-453453    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-proxy-45g92                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-scheduler-addons-453453             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 metrics-server-c59844bb4-5m4fv           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m4s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  yakd-dashboard              yakd-dashboard-799879c74f-rzt74          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m5s                   kube-proxy       
	  Normal  Starting                 5m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m29s (x8 over 5m29s)  kubelet          Node addons-453453 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m29s (x8 over 5m29s)  kubelet          Node addons-453453 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m29s (x7 over 5m29s)  kubelet          Node addons-453453 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m23s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m23s                  kubelet          Node addons-453453 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m23s                  kubelet          Node addons-453453 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m23s                  kubelet          Node addons-453453 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m22s                  kubelet          Node addons-453453 status is now: NodeReady
	  Normal  RegisteredNode           5m11s                  node-controller  Node addons-453453 event: Registered Node addons-453453 in Controller
	
	
	==> dmesg <==
	[Jul17 18:05] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.862494] systemd-fstab-generator[1468]: Ignoring "noauto" option for root device
	[  +5.158968] kauditd_printk_skb: 111 callbacks suppressed
	[  +5.072495] kauditd_printk_skb: 136 callbacks suppressed
	[  +6.950898] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.981210] kauditd_printk_skb: 6 callbacks suppressed
	[ +25.113127] kauditd_printk_skb: 23 callbacks suppressed
	[Jul17 18:06] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.264345] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.107585] kauditd_printk_skb: 86 callbacks suppressed
	[  +5.795056] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.028091] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.048063] kauditd_printk_skb: 7 callbacks suppressed
	[Jul17 18:07] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.244030] kauditd_printk_skb: 40 callbacks suppressed
	[  +7.731581] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.022055] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.460118] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.331484] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.023422] kauditd_printk_skb: 8 callbacks suppressed
	[Jul17 18:08] kauditd_printk_skb: 30 callbacks suppressed
	[  +6.444456] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.890358] kauditd_printk_skb: 41 callbacks suppressed
	[Jul17 18:10] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.092949] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92] <==
	{"level":"warn","ts":"2024-07-17T18:06:27.110882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"350.889447ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-csc46\" ","response":"range_response_count:1 size:3634"}
	{"level":"info","ts":"2024-07-17T18:06:27.110931Z","caller":"traceutil/trace.go:171","msg":"trace[160697363] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-patch-csc46; range_end:; response_count:1; response_revision:1125; }","duration":"350.973692ms","start":"2024-07-17T18:06:26.759944Z","end":"2024-07-17T18:06:27.110918Z","steps":["trace[160697363] 'agreement among raft nodes before linearized reading'  (duration: 350.785965ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:06:27.110971Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T18:06:26.759932Z","time spent":"351.030953ms","remote":"127.0.0.1:56800","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":3656,"request content":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-csc46\" "}
	{"level":"warn","ts":"2024-07-17T18:06:27.111104Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"285.287972ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-5m4fv\" ","response":"range_response_count:1 size:4461"}
	{"level":"info","ts":"2024-07-17T18:06:27.11114Z","caller":"traceutil/trace.go:171","msg":"trace[793819518] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-5m4fv; range_end:; response_count:1; response_revision:1125; }","duration":"285.401931ms","start":"2024-07-17T18:06:26.825732Z","end":"2024-07-17T18:06:27.111134Z","steps":["trace[793819518] 'agreement among raft nodes before linearized reading'  (duration: 285.332056ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:06:27.111862Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.681921ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85652"}
	{"level":"info","ts":"2024-07-17T18:06:27.111907Z","caller":"traceutil/trace.go:171","msg":"trace[189773330] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1125; }","duration":"260.749293ms","start":"2024-07-17T18:06:26.851149Z","end":"2024-07-17T18:06:27.111899Z","steps":["trace[189773330] 'agreement among raft nodes before linearized reading'  (duration: 260.526867ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:06:27.1152Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.801845ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-17T18:06:27.115242Z","caller":"traceutil/trace.go:171","msg":"trace[342097142] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1125; }","duration":"202.847967ms","start":"2024-07-17T18:06:26.912387Z","end":"2024-07-17T18:06:27.115235Z","steps":["trace[342097142] 'agreement among raft nodes before linearized reading'  (duration: 198.860277ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:06:29.656076Z","caller":"traceutil/trace.go:171","msg":"trace[1958085727] transaction","detail":"{read_only:false; response_revision:1145; number_of_response:1; }","duration":"199.158334ms","start":"2024-07-17T18:06:29.456903Z","end":"2024-07-17T18:06:29.656061Z","steps":["trace[1958085727] 'process raft request'  (duration: 198.750552ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:06:41.972723Z","caller":"traceutil/trace.go:171","msg":"trace[1698292236] linearizableReadLoop","detail":"{readStateIndex:1249; appliedIndex:1248; }","duration":"237.331853ms","start":"2024-07-17T18:06:41.735364Z","end":"2024-07-17T18:06:41.972696Z","steps":["trace[1698292236] 'read index received'  (duration: 237.171461ms)","trace[1698292236] 'applied index is now lower than readState.Index'  (duration: 159.727µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T18:06:41.972862Z","caller":"traceutil/trace.go:171","msg":"trace[13708761] transaction","detail":"{read_only:false; response_revision:1213; number_of_response:1; }","duration":"423.839222ms","start":"2024-07-17T18:06:41.549017Z","end":"2024-07-17T18:06:41.972856Z","steps":["trace[13708761] 'process raft request'  (duration: 423.534734ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:06:41.973061Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T18:06:41.549001Z","time spent":"423.944296ms","remote":"127.0.0.1:56896","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":485,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1167 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:426 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"warn","ts":"2024-07-17T18:06:41.973207Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.859096ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-17T18:06:41.973251Z","caller":"traceutil/trace.go:171","msg":"trace[471786989] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1213; }","duration":"237.919666ms","start":"2024-07-17T18:06:41.735323Z","end":"2024-07-17T18:06:41.973243Z","steps":["trace[471786989] 'agreement among raft nodes before linearized reading'  (duration: 237.826974ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:06:41.973416Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.51641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gadget/gadget-dd2tz.17e3123937aca04e\" ","response":"range_response_count:1 size:808"}
	{"level":"info","ts":"2024-07-17T18:06:41.973471Z","caller":"traceutil/trace.go:171","msg":"trace[1903211114] range","detail":"{range_begin:/registry/events/gadget/gadget-dd2tz.17e3123937aca04e; range_end:; response_count:1; response_revision:1213; }","duration":"172.595156ms","start":"2024-07-17T18:06:41.800867Z","end":"2024-07-17T18:06:41.973462Z","steps":["trace[1903211114] 'agreement among raft nodes before linearized reading'  (duration: 172.476341ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:06:41.973617Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.772158ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85460"}
	{"level":"info","ts":"2024-07-17T18:06:41.973666Z","caller":"traceutil/trace.go:171","msg":"trace[1968914310] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1213; }","duration":"125.828626ms","start":"2024-07-17T18:06:41.847821Z","end":"2024-07-17T18:06:41.973649Z","steps":["trace[1968914310] 'agreement among raft nodes before linearized reading'  (duration: 125.628559ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:07:39.372456Z","caller":"traceutil/trace.go:171","msg":"trace[1997254569] linearizableReadLoop","detail":"{readStateIndex:1591; appliedIndex:1590; }","duration":"270.673324ms","start":"2024-07-17T18:07:39.101743Z","end":"2024-07-17T18:07:39.372416Z","steps":["trace[1997254569] 'read index received'  (duration: 270.355179ms)","trace[1997254569] 'applied index is now lower than readState.Index'  (duration: 317.687µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T18:07:39.37286Z","caller":"traceutil/trace.go:171","msg":"trace[687251374] transaction","detail":"{read_only:false; response_revision:1536; number_of_response:1; }","duration":"326.275507ms","start":"2024-07-17T18:07:39.046569Z","end":"2024-07-17T18:07:39.372844Z","steps":["trace[687251374] 'process raft request'  (duration: 325.56784ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:07:39.373029Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T18:07:39.046554Z","time spent":"326.36825ms","remote":"127.0.0.1:56800","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3419,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/default/cloud-spanner-emulator-6fcd4f6f98-jtcdk\" mod_revision:1535 > success:<request_put:<key:\"/registry/pods/default/cloud-spanner-emulator-6fcd4f6f98-jtcdk\" value_size:3349 >> failure:<request_range:<key:\"/registry/pods/default/cloud-spanner-emulator-6fcd4f6f98-jtcdk\" > >"}
	{"level":"warn","ts":"2024-07-17T18:07:39.374016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.268417ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:12969"}
	{"level":"info","ts":"2024-07-17T18:07:39.374813Z","caller":"traceutil/trace.go:171","msg":"trace[731522879] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1536; }","duration":"273.022005ms","start":"2024-07-17T18:07:39.101713Z","end":"2024-07-17T18:07:39.374735Z","steps":["trace[731522879] 'agreement among raft nodes before linearized reading'  (duration: 271.381086ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:07:49.360129Z","caller":"traceutil/trace.go:171","msg":"trace[1608737015] transaction","detail":"{read_only:false; response_revision:1562; number_of_response:1; }","duration":"118.640204ms","start":"2024-07-17T18:07:49.241468Z","end":"2024-07-17T18:07:49.360108Z","steps":["trace[1608737015] 'process raft request'  (duration: 118.422953ms)"],"step_count":1}
	
	
	==> gcp-auth [5ab3688fd15de45b760f574e9673fa61a7686ac369815e917070b3418d588be8] <==
	2024/07/17 18:06:37 GCP Auth Webhook started!
	2024/07/17 18:07:13 Ready to marshal response ...
	2024/07/17 18:07:13 Ready to write response ...
	2024/07/17 18:07:13 Ready to marshal response ...
	2024/07/17 18:07:13 Ready to write response ...
	2024/07/17 18:07:14 Ready to marshal response ...
	2024/07/17 18:07:14 Ready to write response ...
	2024/07/17 18:07:24 Ready to marshal response ...
	2024/07/17 18:07:24 Ready to write response ...
	2024/07/17 18:07:24 Ready to marshal response ...
	2024/07/17 18:07:24 Ready to write response ...
	2024/07/17 18:07:30 Ready to marshal response ...
	2024/07/17 18:07:30 Ready to write response ...
	2024/07/17 18:07:30 Ready to marshal response ...
	2024/07/17 18:07:30 Ready to write response ...
	2024/07/17 18:07:36 Ready to marshal response ...
	2024/07/17 18:07:36 Ready to write response ...
	2024/07/17 18:07:38 Ready to marshal response ...
	2024/07/17 18:07:38 Ready to write response ...
	2024/07/17 18:07:49 Ready to marshal response ...
	2024/07/17 18:07:49 Ready to write response ...
	2024/07/17 18:08:13 Ready to marshal response ...
	2024/07/17 18:08:13 Ready to write response ...
	2024/07/17 18:10:09 Ready to marshal response ...
	2024/07/17 18:10:09 Ready to write response ...
	
	
	==> kernel <==
	 18:10:20 up 5 min,  0 users,  load average: 0.55, 1.02, 0.55
	Linux addons-453453 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68] <==
	E0717 18:06:37.311323       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.45.40:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.45.40:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.45.40:443: connect: connection refused
	E0717 18:06:37.312034       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.45.40:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.45.40:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.45.40:443: connect: connection refused
	E0717 18:06:37.321902       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.45.40:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.45.40:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.45.40:443: connect: connection refused
	I0717 18:06:37.425635       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 18:07:13.910086       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.77.54"}
	I0717 18:07:32.594045       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0717 18:07:33.638580       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0717 18:07:38.096693       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0717 18:07:38.345252       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.118.236"}
	I0717 18:07:52.163105       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0717 18:08:05.607490       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0717 18:08:29.172443       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:08:29.172502       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:08:29.206878       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:08:29.206948       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:08:29.228930       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:08:29.228979       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:08:29.229719       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:08:29.229831       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:08:29.255268       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:08:29.255318       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 18:08:30.230189       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 18:08:30.256288       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 18:08:30.270879       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0717 18:10:09.835299       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.41.106"}
	
	
	==> kube-controller-manager [35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1] <==
	E0717 18:08:49.189340       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:09:05.272995       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:09:05.273047       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:09:06.559880       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:09:06.560052       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:09:07.442434       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:09:07.442571       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:09:36.854829       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:09:36.854991       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:09:43.449105       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:09:43.449147       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:09:43.665336       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:09:43.665544       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:09:45.350973       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:09:45.351019       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 18:10:09.694182       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="59.716923ms"
	I0717 18:10:09.710638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="16.318005ms"
	I0717 18:10:09.710717       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="37.814µs"
	I0717 18:10:11.867888       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0717 18:10:11.876274       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="4.3µs"
	I0717 18:10:11.879427       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0717 18:10:13.225395       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="8.502315ms"
	I0717 18:10:13.226168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="40.612µs"
	W0717 18:10:14.478433       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:10:14.478556       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3] <==
	I0717 18:05:13.926347       1 server_linux.go:69] "Using iptables proxy"
	I0717 18:05:14.044500       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.136"]
	I0717 18:05:14.224118       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:05:14.224155       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:05:14.224174       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:05:14.230933       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:05:14.231144       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:05:14.231156       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:05:14.243337       1 config.go:192] "Starting service config controller"
	I0717 18:05:14.243354       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:05:14.243378       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:05:14.243381       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:05:14.248450       1 config.go:319] "Starting node config controller"
	I0717 18:05:14.248462       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:05:14.350824       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 18:05:14.350868       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:05:14.351083       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5] <==
	W0717 18:04:55.818147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:04:55.818190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 18:04:55.985082       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 18:04:55.985173       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 18:04:55.995880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:04:55.996586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:04:56.052927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:04:56.053008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:04:56.059173       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:04:56.059264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 18:04:56.081055       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:04:56.081099       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:04:56.141145       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:04:56.141277       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:04:56.184643       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:04:56.184736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 18:04:56.202568       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:04:56.202652       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 18:04:56.219280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:04:56.219369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 18:04:56.248335       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:04:56.248499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 18:04:56.270013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:04:56.270104       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0717 18:04:58.651442       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 18:10:09 addons-453453 kubelet[1277]: I0717 18:10:09.684287    1277 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d8d775-ebb1-49e1-ab0f-c444ed5d0f0f" containerName="task-pv-container"
	Jul 17 18:10:09 addons-453453 kubelet[1277]: I0717 18:10:09.816109    1277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs9tq\" (UniqueName: \"kubernetes.io/projected/9b273295-d4f1-43aa-b0ef-d148763f6593-kube-api-access-rs9tq\") pod \"hello-world-app-6778b5fc9f-6bfmd\" (UID: \"9b273295-d4f1-43aa-b0ef-d148763f6593\") " pod="default/hello-world-app-6778b5fc9f-6bfmd"
	Jul 17 18:10:09 addons-453453 kubelet[1277]: I0717 18:10:09.816213    1277 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9b273295-d4f1-43aa-b0ef-d148763f6593-gcp-creds\") pod \"hello-world-app-6778b5fc9f-6bfmd\" (UID: \"9b273295-d4f1-43aa-b0ef-d148763f6593\") " pod="default/hello-world-app-6778b5fc9f-6bfmd"
	Jul 17 18:10:10 addons-453453 kubelet[1277]: I0717 18:10:10.925627    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmnwb\" (UniqueName: \"kubernetes.io/projected/62d0dcb4-1d9b-4177-b580-84291702a582-kube-api-access-lmnwb\") pod \"62d0dcb4-1d9b-4177-b580-84291702a582\" (UID: \"62d0dcb4-1d9b-4177-b580-84291702a582\") "
	Jul 17 18:10:10 addons-453453 kubelet[1277]: I0717 18:10:10.930096    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62d0dcb4-1d9b-4177-b580-84291702a582-kube-api-access-lmnwb" (OuterVolumeSpecName: "kube-api-access-lmnwb") pod "62d0dcb4-1d9b-4177-b580-84291702a582" (UID: "62d0dcb4-1d9b-4177-b580-84291702a582"). InnerVolumeSpecName "kube-api-access-lmnwb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 18:10:11 addons-453453 kubelet[1277]: I0717 18:10:11.026493    1277 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lmnwb\" (UniqueName: \"kubernetes.io/projected/62d0dcb4-1d9b-4177-b580-84291702a582-kube-api-access-lmnwb\") on node \"addons-453453\" DevicePath \"\""
	Jul 17 18:10:11 addons-453453 kubelet[1277]: I0717 18:10:11.178498    1277 scope.go:117] "RemoveContainer" containerID="75ed1c6cc131c7a34b545fd84e82721fcdc4acc67af397306aceba7a60d99e48"
	Jul 17 18:10:11 addons-453453 kubelet[1277]: I0717 18:10:11.217295    1277 scope.go:117] "RemoveContainer" containerID="75ed1c6cc131c7a34b545fd84e82721fcdc4acc67af397306aceba7a60d99e48"
	Jul 17 18:10:11 addons-453453 kubelet[1277]: E0717 18:10:11.218087    1277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75ed1c6cc131c7a34b545fd84e82721fcdc4acc67af397306aceba7a60d99e48\": container with ID starting with 75ed1c6cc131c7a34b545fd84e82721fcdc4acc67af397306aceba7a60d99e48 not found: ID does not exist" containerID="75ed1c6cc131c7a34b545fd84e82721fcdc4acc67af397306aceba7a60d99e48"
	Jul 17 18:10:11 addons-453453 kubelet[1277]: I0717 18:10:11.218306    1277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ed1c6cc131c7a34b545fd84e82721fcdc4acc67af397306aceba7a60d99e48"} err="failed to get container status \"75ed1c6cc131c7a34b545fd84e82721fcdc4acc67af397306aceba7a60d99e48\": rpc error: code = NotFound desc = could not find container \"75ed1c6cc131c7a34b545fd84e82721fcdc4acc67af397306aceba7a60d99e48\": container with ID starting with 75ed1c6cc131c7a34b545fd84e82721fcdc4acc67af397306aceba7a60d99e48 not found: ID does not exist"
	Jul 17 18:10:11 addons-453453 kubelet[1277]: I0717 18:10:11.802453    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62d0dcb4-1d9b-4177-b580-84291702a582" path="/var/lib/kubelet/pods/62d0dcb4-1d9b-4177-b580-84291702a582/volumes"
	Jul 17 18:10:13 addons-453453 kubelet[1277]: I0717 18:10:13.214342    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-6bfmd" podStartSLOduration=1.661419327 podStartE2EDuration="4.214309159s" podCreationTimestamp="2024-07-17 18:10:09 +0000 UTC" firstStartedPulling="2024-07-17 18:10:10.254691757 +0000 UTC m=+312.602311560" lastFinishedPulling="2024-07-17 18:10:12.807581587 +0000 UTC m=+315.155201392" observedRunningTime="2024-07-17 18:10:13.213459976 +0000 UTC m=+315.561079798" watchObservedRunningTime="2024-07-17 18:10:13.214309159 +0000 UTC m=+315.561928979"
	Jul 17 18:10:13 addons-453453 kubelet[1277]: I0717 18:10:13.799961    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05308e26-c541-463f-b368-552cc3c07fa1" path="/var/lib/kubelet/pods/05308e26-c541-463f-b368-552cc3c07fa1/volumes"
	Jul 17 18:10:13 addons-453453 kubelet[1277]: I0717 18:10:13.800445    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79200b8-349b-4aaa-b7fd-ec6030c13900" path="/var/lib/kubelet/pods/b79200b8-349b-4aaa-b7fd-ec6030c13900/volumes"
	Jul 17 18:10:15 addons-453453 kubelet[1277]: I0717 18:10:15.165262    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq2zt\" (UniqueName: \"kubernetes.io/projected/a322614a-b8dc-4486-8666-27a4d1165a14-kube-api-access-fq2zt\") pod \"a322614a-b8dc-4486-8666-27a4d1165a14\" (UID: \"a322614a-b8dc-4486-8666-27a4d1165a14\") "
	Jul 17 18:10:15 addons-453453 kubelet[1277]: I0717 18:10:15.165309    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a322614a-b8dc-4486-8666-27a4d1165a14-webhook-cert\") pod \"a322614a-b8dc-4486-8666-27a4d1165a14\" (UID: \"a322614a-b8dc-4486-8666-27a4d1165a14\") "
	Jul 17 18:10:15 addons-453453 kubelet[1277]: I0717 18:10:15.169874    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a322614a-b8dc-4486-8666-27a4d1165a14-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a322614a-b8dc-4486-8666-27a4d1165a14" (UID: "a322614a-b8dc-4486-8666-27a4d1165a14"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 18:10:15 addons-453453 kubelet[1277]: I0717 18:10:15.171957    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a322614a-b8dc-4486-8666-27a4d1165a14-kube-api-access-fq2zt" (OuterVolumeSpecName: "kube-api-access-fq2zt") pod "a322614a-b8dc-4486-8666-27a4d1165a14" (UID: "a322614a-b8dc-4486-8666-27a4d1165a14"). InnerVolumeSpecName "kube-api-access-fq2zt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 18:10:15 addons-453453 kubelet[1277]: I0717 18:10:15.213674    1277 scope.go:117] "RemoveContainer" containerID="873fdedb8e30e5e371deb5db12d4574c7236a39f18559bbaab1929a12149bd43"
	Jul 17 18:10:15 addons-453453 kubelet[1277]: I0717 18:10:15.235535    1277 scope.go:117] "RemoveContainer" containerID="873fdedb8e30e5e371deb5db12d4574c7236a39f18559bbaab1929a12149bd43"
	Jul 17 18:10:15 addons-453453 kubelet[1277]: E0717 18:10:15.236129    1277 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"873fdedb8e30e5e371deb5db12d4574c7236a39f18559bbaab1929a12149bd43\": container with ID starting with 873fdedb8e30e5e371deb5db12d4574c7236a39f18559bbaab1929a12149bd43 not found: ID does not exist" containerID="873fdedb8e30e5e371deb5db12d4574c7236a39f18559bbaab1929a12149bd43"
	Jul 17 18:10:15 addons-453453 kubelet[1277]: I0717 18:10:15.236164    1277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"873fdedb8e30e5e371deb5db12d4574c7236a39f18559bbaab1929a12149bd43"} err="failed to get container status \"873fdedb8e30e5e371deb5db12d4574c7236a39f18559bbaab1929a12149bd43\": rpc error: code = NotFound desc = could not find container \"873fdedb8e30e5e371deb5db12d4574c7236a39f18559bbaab1929a12149bd43\": container with ID starting with 873fdedb8e30e5e371deb5db12d4574c7236a39f18559bbaab1929a12149bd43 not found: ID does not exist"
	Jul 17 18:10:15 addons-453453 kubelet[1277]: I0717 18:10:15.266223    1277 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fq2zt\" (UniqueName: \"kubernetes.io/projected/a322614a-b8dc-4486-8666-27a4d1165a14-kube-api-access-fq2zt\") on node \"addons-453453\" DevicePath \"\""
	Jul 17 18:10:15 addons-453453 kubelet[1277]: I0717 18:10:15.266268    1277 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a322614a-b8dc-4486-8666-27a4d1165a14-webhook-cert\") on node \"addons-453453\" DevicePath \"\""
	Jul 17 18:10:15 addons-453453 kubelet[1277]: I0717 18:10:15.800391    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a322614a-b8dc-4486-8666-27a4d1165a14" path="/var/lib/kubelet/pods/a322614a-b8dc-4486-8666-27a4d1165a14/volumes"
	
	
	==> storage-provisioner [3bfb23522a04a0b30f4683eb4e6f062603e4e822ef53c669efc17930b868dc18] <==
	I0717 18:05:18.735487       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 18:05:18.829905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 18:05:18.829965       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 18:05:18.863117       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 18:05:18.864577       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"445c8ab1-cca9-4774-8f14-886e434338d5", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-453453_6ef61689-9b7b-4351-84e3-8a1a74c71fe0 became leader
	I0717 18:05:18.893051       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-453453_6ef61689-9b7b-4351-84e3-8a1a74c71fe0!
	I0717 18:05:18.997440       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-453453_6ef61689-9b7b-4351-84e3-8a1a74c71fe0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-453453 -n addons-453453
helpers_test.go:261: (dbg) Run:  kubectl --context addons-453453 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (163.14s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (339.64s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.368396ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-5m4fv" [886d3903-d44e-489c-bf8d-be11494d150b] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006123175s
addons_test.go:417: (dbg) Run:  kubectl --context addons-453453 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-453453 top pods -n kube-system: exit status 1 (119.680442ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wpzc7, age: 2m32.745113251s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-453453 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-453453 top pods -n kube-system: exit status 1 (66.836687ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wpzc7, age: 2m37.103201715s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-453453 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-453453 top pods -n kube-system: exit status 1 (81.308261ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wpzc7, age: 2m43.773445751s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-453453 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-453453 top pods -n kube-system: exit status 1 (63.600505ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wpzc7, age: 2m48.269522778s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-453453 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-453453 top pods -n kube-system: exit status 1 (76.366471ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wpzc7, age: 2m57.270124233s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-453453 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-453453 top pods -n kube-system: exit status 1 (69.277577ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wpzc7, age: 3m7.957502557s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-453453 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-453453 top pods -n kube-system: exit status 1 (73.183675ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wpzc7, age: 3m37.023159265s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-453453 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-453453 top pods -n kube-system: exit status 1 (65.58605ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wpzc7, age: 4m13.755922645s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-453453 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-453453 top pods -n kube-system: exit status 1 (67.263223ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wpzc7, age: 5m24.544874729s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-453453 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-453453 top pods -n kube-system: exit status 1 (64.367842ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wpzc7, age: 6m44.249355383s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-453453 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-453453 top pods -n kube-system: exit status 1 (66.189071ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-wpzc7, age: 8m4.460051691s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-453453 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-453453 -n addons-453453
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-453453 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-453453 logs -n 25: (1.395771164s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-188993                                                                     | download-only-188993 | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC | 17 Jul 24 18:04 UTC |
	| delete  | -p download-only-013846                                                                     | download-only-013846 | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC | 17 Jul 24 18:04 UTC |
	| delete  | -p download-only-669228                                                                     | download-only-669228 | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC | 17 Jul 24 18:04 UTC |
	| delete  | -p download-only-188993                                                                     | download-only-188993 | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC | 17 Jul 24 18:04 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-742633 | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC |                     |
	|         | binary-mirror-742633                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38237                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-742633                                                                     | binary-mirror-742633 | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC | 17 Jul 24 18:04 UTC |
	| addons  | disable dashboard -p                                                                        | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC |                     |
	|         | addons-453453                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC |                     |
	|         | addons-453453                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-453453 --wait=true                                                                | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:04 UTC | 17 Jul 24 18:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:07 UTC |
	|         | -p addons-453453                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:07 UTC |
	|         | -p addons-453453                                                                            |                      |         |         |                     |                     |
	| addons  | addons-453453 addons disable                                                                | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:07 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-453453 ip                                                                            | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:07 UTC |
	| addons  | disable inspektor-gadget -p                                                                 | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:07 UTC |
	|         | addons-453453                                                                               |                      |         |         |                     |                     |
	| addons  | addons-453453 addons disable                                                                | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:07 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:07 UTC |
	|         | addons-453453                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-453453 ssh cat                                                                       | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:07 UTC |
	|         | /opt/local-path-provisioner/pvc-78518099-7f58-4e6b-b950-2bfc9e8ecd09_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-453453 addons disable                                                                | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC | 17 Jul 24 18:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-453453 ssh curl -s                                                                   | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:07 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-453453 addons                                                                        | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:08 UTC | 17 Jul 24 18:08 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-453453 addons                                                                        | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:08 UTC | 17 Jul 24 18:08 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-453453 ip                                                                            | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:10 UTC | 17 Jul 24 18:10 UTC |
	| addons  | addons-453453 addons disable                                                                | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:10 UTC | 17 Jul 24 18:10 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-453453 addons disable                                                                | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:10 UTC | 17 Jul 24 18:10 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-453453 addons                                                                        | addons-453453        | jenkins | v1.33.1 | 17 Jul 24 18:13 UTC | 17 Jul 24 18:13 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:04:20
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:04:20.238019  401374 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:04:20.238276  401374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:04:20.238286  401374 out.go:304] Setting ErrFile to fd 2...
	I0717 18:04:20.238290  401374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:04:20.238492  401374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:04:20.239079  401374 out.go:298] Setting JSON to false
	I0717 18:04:20.239977  401374 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6403,"bootTime":1721233057,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:04:20.240035  401374 start.go:139] virtualization: kvm guest
	I0717 18:04:20.242322  401374 out.go:177] * [addons-453453] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:04:20.243713  401374 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 18:04:20.243764  401374 notify.go:220] Checking for updates...
	I0717 18:04:20.246141  401374 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:04:20.247315  401374 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:04:20.248548  401374 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:04:20.249831  401374 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:04:20.250986  401374 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:04:20.252368  401374 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:04:20.284093  401374 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 18:04:20.285368  401374 start.go:297] selected driver: kvm2
	I0717 18:04:20.285386  401374 start.go:901] validating driver "kvm2" against <nil>
	I0717 18:04:20.285399  401374 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:04:20.286100  401374 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:04:20.286194  401374 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:04:20.301062  401374 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:04:20.301117  401374 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 18:04:20.301348  401374 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:04:20.301384  401374 cni.go:84] Creating CNI manager for ""
	I0717 18:04:20.301395  401374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:04:20.301412  401374 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 18:04:20.301489  401374 start.go:340] cluster config:
	{Name:addons-453453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-453453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:04:20.301621  401374 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:04:20.303284  401374 out.go:177] * Starting "addons-453453" primary control-plane node in "addons-453453" cluster
	I0717 18:04:20.304511  401374 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:04:20.304552  401374 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:04:20.304576  401374 cache.go:56] Caching tarball of preloaded images
	I0717 18:04:20.304653  401374 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:04:20.304672  401374 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:04:20.304989  401374 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/config.json ...
	I0717 18:04:20.305018  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/config.json: {Name:mkbb6ecf8797c490e907fa1b568b86907773cade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:20.305171  401374 start.go:360] acquireMachinesLock for addons-453453: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:04:20.305233  401374 start.go:364] duration metric: took 45.399µs to acquireMachinesLock for "addons-453453"
	I0717 18:04:20.305258  401374 start.go:93] Provisioning new machine with config: &{Name:addons-453453 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-453453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:04:20.305325  401374 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 18:04:20.306805  401374 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0717 18:04:20.306923  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:04:20.306961  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:04:20.321739  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33663
	I0717 18:04:20.322289  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:04:20.323043  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:04:20.323070  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:04:20.323409  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:04:20.323612  401374 main.go:141] libmachine: (addons-453453) Calling .GetMachineName
	I0717 18:04:20.323743  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:04:20.323883  401374 start.go:159] libmachine.API.Create for "addons-453453" (driver="kvm2")
	I0717 18:04:20.323916  401374 client.go:168] LocalClient.Create starting
	I0717 18:04:20.323956  401374 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem
	I0717 18:04:20.518397  401374 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem
	I0717 18:04:20.607035  401374 main.go:141] libmachine: Running pre-create checks...
	I0717 18:04:20.607059  401374 main.go:141] libmachine: (addons-453453) Calling .PreCreateCheck
	I0717 18:04:20.607635  401374 main.go:141] libmachine: (addons-453453) Calling .GetConfigRaw
	I0717 18:04:20.608126  401374 main.go:141] libmachine: Creating machine...
	I0717 18:04:20.608142  401374 main.go:141] libmachine: (addons-453453) Calling .Create
	I0717 18:04:20.608281  401374 main.go:141] libmachine: (addons-453453) Creating KVM machine...
	I0717 18:04:20.609537  401374 main.go:141] libmachine: (addons-453453) DBG | found existing default KVM network
	I0717 18:04:20.610369  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:20.610236  401396 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0717 18:04:20.610439  401374 main.go:141] libmachine: (addons-453453) DBG | created network xml: 
	I0717 18:04:20.610463  401374 main.go:141] libmachine: (addons-453453) DBG | <network>
	I0717 18:04:20.610474  401374 main.go:141] libmachine: (addons-453453) DBG |   <name>mk-addons-453453</name>
	I0717 18:04:20.610488  401374 main.go:141] libmachine: (addons-453453) DBG |   <dns enable='no'/>
	I0717 18:04:20.610499  401374 main.go:141] libmachine: (addons-453453) DBG |   
	I0717 18:04:20.610513  401374 main.go:141] libmachine: (addons-453453) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 18:04:20.610525  401374 main.go:141] libmachine: (addons-453453) DBG |     <dhcp>
	I0717 18:04:20.610534  401374 main.go:141] libmachine: (addons-453453) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 18:04:20.610544  401374 main.go:141] libmachine: (addons-453453) DBG |     </dhcp>
	I0717 18:04:20.610551  401374 main.go:141] libmachine: (addons-453453) DBG |   </ip>
	I0717 18:04:20.610560  401374 main.go:141] libmachine: (addons-453453) DBG |   
	I0717 18:04:20.610568  401374 main.go:141] libmachine: (addons-453453) DBG | </network>
	I0717 18:04:20.610596  401374 main.go:141] libmachine: (addons-453453) DBG | 
	I0717 18:04:20.615817  401374 main.go:141] libmachine: (addons-453453) DBG | trying to create private KVM network mk-addons-453453 192.168.39.0/24...
	I0717 18:04:20.679873  401374 main.go:141] libmachine: (addons-453453) DBG | private KVM network mk-addons-453453 192.168.39.0/24 created
	I0717 18:04:20.679908  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:20.679830  401396 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:04:20.679920  401374 main.go:141] libmachine: (addons-453453) Setting up store path in /home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453 ...
	I0717 18:04:20.679939  401374 main.go:141] libmachine: (addons-453453) Building disk image from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 18:04:20.679954  401374 main.go:141] libmachine: (addons-453453) Downloading /home/jenkins/minikube-integration/19282-392903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 18:04:20.960801  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:20.960616  401396 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa...
	I0717 18:04:21.027115  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:21.026956  401396 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/addons-453453.rawdisk...
	I0717 18:04:21.027151  401374 main.go:141] libmachine: (addons-453453) DBG | Writing magic tar header
	I0717 18:04:21.027162  401374 main.go:141] libmachine: (addons-453453) DBG | Writing SSH key tar header
	I0717 18:04:21.027171  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:21.027077  401396 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453 ...
	I0717 18:04:21.027184  401374 main.go:141] libmachine: (addons-453453) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453
	I0717 18:04:21.027219  401374 main.go:141] libmachine: (addons-453453) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453 (perms=drwx------)
	I0717 18:04:21.027245  401374 main.go:141] libmachine: (addons-453453) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines (perms=drwxr-xr-x)
	I0717 18:04:21.027258  401374 main.go:141] libmachine: (addons-453453) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines
	I0717 18:04:21.027286  401374 main.go:141] libmachine: (addons-453453) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube (perms=drwxr-xr-x)
	I0717 18:04:21.027317  401374 main.go:141] libmachine: (addons-453453) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903 (perms=drwxrwxr-x)
	I0717 18:04:21.027333  401374 main.go:141] libmachine: (addons-453453) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:04:21.027344  401374 main.go:141] libmachine: (addons-453453) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 18:04:21.027363  401374 main.go:141] libmachine: (addons-453453) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 18:04:21.027377  401374 main.go:141] libmachine: (addons-453453) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903
	I0717 18:04:21.027389  401374 main.go:141] libmachine: (addons-453453) Creating domain...
	I0717 18:04:21.027407  401374 main.go:141] libmachine: (addons-453453) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 18:04:21.027423  401374 main.go:141] libmachine: (addons-453453) DBG | Checking permissions on dir: /home/jenkins
	I0717 18:04:21.027438  401374 main.go:141] libmachine: (addons-453453) DBG | Checking permissions on dir: /home
	I0717 18:04:21.027449  401374 main.go:141] libmachine: (addons-453453) DBG | Skipping /home - not owner
	I0717 18:04:21.028343  401374 main.go:141] libmachine: (addons-453453) define libvirt domain using xml: 
	I0717 18:04:21.028363  401374 main.go:141] libmachine: (addons-453453) <domain type='kvm'>
	I0717 18:04:21.028372  401374 main.go:141] libmachine: (addons-453453)   <name>addons-453453</name>
	I0717 18:04:21.028380  401374 main.go:141] libmachine: (addons-453453)   <memory unit='MiB'>4000</memory>
	I0717 18:04:21.028411  401374 main.go:141] libmachine: (addons-453453)   <vcpu>2</vcpu>
	I0717 18:04:21.028423  401374 main.go:141] libmachine: (addons-453453)   <features>
	I0717 18:04:21.028431  401374 main.go:141] libmachine: (addons-453453)     <acpi/>
	I0717 18:04:21.028437  401374 main.go:141] libmachine: (addons-453453)     <apic/>
	I0717 18:04:21.028445  401374 main.go:141] libmachine: (addons-453453)     <pae/>
	I0717 18:04:21.028451  401374 main.go:141] libmachine: (addons-453453)     
	I0717 18:04:21.028456  401374 main.go:141] libmachine: (addons-453453)   </features>
	I0717 18:04:21.028465  401374 main.go:141] libmachine: (addons-453453)   <cpu mode='host-passthrough'>
	I0717 18:04:21.028469  401374 main.go:141] libmachine: (addons-453453)   
	I0717 18:04:21.028504  401374 main.go:141] libmachine: (addons-453453)   </cpu>
	I0717 18:04:21.028511  401374 main.go:141] libmachine: (addons-453453)   <os>
	I0717 18:04:21.028518  401374 main.go:141] libmachine: (addons-453453)     <type>hvm</type>
	I0717 18:04:21.028525  401374 main.go:141] libmachine: (addons-453453)     <boot dev='cdrom'/>
	I0717 18:04:21.028529  401374 main.go:141] libmachine: (addons-453453)     <boot dev='hd'/>
	I0717 18:04:21.028534  401374 main.go:141] libmachine: (addons-453453)     <bootmenu enable='no'/>
	I0717 18:04:21.028540  401374 main.go:141] libmachine: (addons-453453)   </os>
	I0717 18:04:21.028545  401374 main.go:141] libmachine: (addons-453453)   <devices>
	I0717 18:04:21.028552  401374 main.go:141] libmachine: (addons-453453)     <disk type='file' device='cdrom'>
	I0717 18:04:21.028563  401374 main.go:141] libmachine: (addons-453453)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/boot2docker.iso'/>
	I0717 18:04:21.028568  401374 main.go:141] libmachine: (addons-453453)       <target dev='hdc' bus='scsi'/>
	I0717 18:04:21.028574  401374 main.go:141] libmachine: (addons-453453)       <readonly/>
	I0717 18:04:21.028577  401374 main.go:141] libmachine: (addons-453453)     </disk>
	I0717 18:04:21.028605  401374 main.go:141] libmachine: (addons-453453)     <disk type='file' device='disk'>
	I0717 18:04:21.028622  401374 main.go:141] libmachine: (addons-453453)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 18:04:21.028631  401374 main.go:141] libmachine: (addons-453453)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/addons-453453.rawdisk'/>
	I0717 18:04:21.028640  401374 main.go:141] libmachine: (addons-453453)       <target dev='hda' bus='virtio'/>
	I0717 18:04:21.028645  401374 main.go:141] libmachine: (addons-453453)     </disk>
	I0717 18:04:21.028654  401374 main.go:141] libmachine: (addons-453453)     <interface type='network'>
	I0717 18:04:21.028660  401374 main.go:141] libmachine: (addons-453453)       <source network='mk-addons-453453'/>
	I0717 18:04:21.028672  401374 main.go:141] libmachine: (addons-453453)       <model type='virtio'/>
	I0717 18:04:21.028677  401374 main.go:141] libmachine: (addons-453453)     </interface>
	I0717 18:04:21.028685  401374 main.go:141] libmachine: (addons-453453)     <interface type='network'>
	I0717 18:04:21.028690  401374 main.go:141] libmachine: (addons-453453)       <source network='default'/>
	I0717 18:04:21.028695  401374 main.go:141] libmachine: (addons-453453)       <model type='virtio'/>
	I0717 18:04:21.028703  401374 main.go:141] libmachine: (addons-453453)     </interface>
	I0717 18:04:21.028720  401374 main.go:141] libmachine: (addons-453453)     <serial type='pty'>
	I0717 18:04:21.028730  401374 main.go:141] libmachine: (addons-453453)       <target port='0'/>
	I0717 18:04:21.028733  401374 main.go:141] libmachine: (addons-453453)     </serial>
	I0717 18:04:21.028739  401374 main.go:141] libmachine: (addons-453453)     <console type='pty'>
	I0717 18:04:21.028744  401374 main.go:141] libmachine: (addons-453453)       <target type='serial' port='0'/>
	I0717 18:04:21.028751  401374 main.go:141] libmachine: (addons-453453)     </console>
	I0717 18:04:21.028756  401374 main.go:141] libmachine: (addons-453453)     <rng model='virtio'>
	I0717 18:04:21.028764  401374 main.go:141] libmachine: (addons-453453)       <backend model='random'>/dev/random</backend>
	I0717 18:04:21.028770  401374 main.go:141] libmachine: (addons-453453)     </rng>
	I0717 18:04:21.028777  401374 main.go:141] libmachine: (addons-453453)     
	I0717 18:04:21.028781  401374 main.go:141] libmachine: (addons-453453)     
	I0717 18:04:21.028786  401374 main.go:141] libmachine: (addons-453453)   </devices>
	I0717 18:04:21.028793  401374 main.go:141] libmachine: (addons-453453) </domain>
	I0717 18:04:21.028800  401374 main.go:141] libmachine: (addons-453453) 
	I0717 18:04:21.034791  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:cb:e5:7a in network default
	I0717 18:04:21.035323  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:21.035338  401374 main.go:141] libmachine: (addons-453453) Ensuring networks are active...
	I0717 18:04:21.036117  401374 main.go:141] libmachine: (addons-453453) Ensuring network default is active
	I0717 18:04:21.036418  401374 main.go:141] libmachine: (addons-453453) Ensuring network mk-addons-453453 is active
	I0717 18:04:21.037148  401374 main.go:141] libmachine: (addons-453453) Getting domain xml...
	I0717 18:04:21.038033  401374 main.go:141] libmachine: (addons-453453) Creating domain...
	I0717 18:04:22.427339  401374 main.go:141] libmachine: (addons-453453) Waiting to get IP...
	I0717 18:04:22.428129  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:22.428575  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:22.428623  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:22.428482  401396 retry.go:31] will retry after 275.951356ms: waiting for machine to come up
	I0717 18:04:22.705991  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:22.706535  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:22.706558  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:22.706485  401396 retry.go:31] will retry after 356.482479ms: waiting for machine to come up
	I0717 18:04:23.065082  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:23.065542  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:23.065569  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:23.065486  401396 retry.go:31] will retry after 375.44866ms: waiting for machine to come up
	I0717 18:04:23.442207  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:23.442672  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:23.442704  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:23.442620  401396 retry.go:31] will retry after 574.721034ms: waiting for machine to come up
	I0717 18:04:24.019349  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:24.019714  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:24.019746  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:24.019670  401396 retry.go:31] will retry after 600.599028ms: waiting for machine to come up
	I0717 18:04:24.621492  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:24.621953  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:24.621979  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:24.621895  401396 retry.go:31] will retry after 626.183649ms: waiting for machine to come up
	I0717 18:04:25.249582  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:25.250011  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:25.250033  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:25.249973  401396 retry.go:31] will retry after 834.131686ms: waiting for machine to come up
	I0717 18:04:26.085481  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:26.085850  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:26.085875  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:26.085792  401396 retry.go:31] will retry after 1.480433748s: waiting for machine to come up
	I0717 18:04:27.568563  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:27.568882  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:27.568911  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:27.568828  401396 retry.go:31] will retry after 1.138683509s: waiting for machine to come up
	I0717 18:04:28.709179  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:28.709602  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:28.709633  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:28.709563  401396 retry.go:31] will retry after 1.557250255s: waiting for machine to come up
	I0717 18:04:30.269361  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:30.269795  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:30.269817  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:30.269747  401396 retry.go:31] will retry after 2.866762957s: waiting for machine to come up
	I0717 18:04:33.140224  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:33.140607  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:33.140672  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:33.140536  401396 retry.go:31] will retry after 3.093750833s: waiting for machine to come up
	I0717 18:04:36.236265  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:36.236662  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find current IP address of domain addons-453453 in network mk-addons-453453
	I0717 18:04:36.236685  401374 main.go:141] libmachine: (addons-453453) DBG | I0717 18:04:36.236609  401396 retry.go:31] will retry after 4.356080984s: waiting for machine to come up
	I0717 18:04:40.593935  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.594279  401374 main.go:141] libmachine: (addons-453453) Found IP for machine: 192.168.39.136
	I0717 18:04:40.594304  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has current primary IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.594310  401374 main.go:141] libmachine: (addons-453453) Reserving static IP address...
	I0717 18:04:40.594935  401374 main.go:141] libmachine: (addons-453453) DBG | unable to find host DHCP lease matching {name: "addons-453453", mac: "52:54:00:43:b0:91", ip: "192.168.39.136"} in network mk-addons-453453
	I0717 18:04:40.665266  401374 main.go:141] libmachine: (addons-453453) Reserved static IP address: 192.168.39.136
	I0717 18:04:40.665297  401374 main.go:141] libmachine: (addons-453453) DBG | Getting to WaitForSSH function...
	I0717 18:04:40.665305  401374 main.go:141] libmachine: (addons-453453) Waiting for SSH to be available...
	I0717 18:04:40.667541  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.667883  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:minikube Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:40.667914  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.668098  401374 main.go:141] libmachine: (addons-453453) DBG | Using SSH client type: external
	I0717 18:04:40.668118  401374 main.go:141] libmachine: (addons-453453) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa (-rw-------)
	I0717 18:04:40.668163  401374 main.go:141] libmachine: (addons-453453) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:04:40.668180  401374 main.go:141] libmachine: (addons-453453) DBG | About to run SSH command:
	I0717 18:04:40.668202  401374 main.go:141] libmachine: (addons-453453) DBG | exit 0
	I0717 18:04:40.788260  401374 main.go:141] libmachine: (addons-453453) DBG | SSH cmd err, output: <nil>: 
	I0717 18:04:40.788555  401374 main.go:141] libmachine: (addons-453453) KVM machine creation complete!
	I0717 18:04:40.788889  401374 main.go:141] libmachine: (addons-453453) Calling .GetConfigRaw
	I0717 18:04:40.789484  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:04:40.789647  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:04:40.789840  401374 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 18:04:40.789857  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:04:40.791095  401374 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 18:04:40.791113  401374 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 18:04:40.791130  401374 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 18:04:40.791140  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:40.793438  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.793778  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:40.793798  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.793946  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:40.794115  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:40.794300  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:40.794423  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:40.794615  401374 main.go:141] libmachine: Using SSH client type: native
	I0717 18:04:40.794817  401374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0717 18:04:40.794827  401374 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 18:04:40.891999  401374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:04:40.892031  401374 main.go:141] libmachine: Detecting the provisioner...
	I0717 18:04:40.892041  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:40.894706  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.895110  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:40.895138  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.895296  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:40.895505  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:40.895667  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:40.895778  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:40.896005  401374 main.go:141] libmachine: Using SSH client type: native
	I0717 18:04:40.896215  401374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0717 18:04:40.896231  401374 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 18:04:40.992946  401374 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 18:04:40.993009  401374 main.go:141] libmachine: found compatible host: buildroot
	I0717 18:04:40.993016  401374 main.go:141] libmachine: Provisioning with buildroot...
	I0717 18:04:40.993023  401374 main.go:141] libmachine: (addons-453453) Calling .GetMachineName
	I0717 18:04:40.993257  401374 buildroot.go:166] provisioning hostname "addons-453453"
	I0717 18:04:40.993281  401374 main.go:141] libmachine: (addons-453453) Calling .GetMachineName
	I0717 18:04:40.993476  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:40.996076  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.996367  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:40.996396  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:40.996600  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:40.996830  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:40.996986  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:40.997128  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:40.997261  401374 main.go:141] libmachine: Using SSH client type: native
	I0717 18:04:40.997490  401374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0717 18:04:40.997509  401374 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-453453 && echo "addons-453453" | sudo tee /etc/hostname
	I0717 18:04:41.112003  401374 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-453453
	
	I0717 18:04:41.112033  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:41.114800  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.115132  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.115165  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.115298  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:41.115445  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.115646  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.115775  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:41.115942  401374 main.go:141] libmachine: Using SSH client type: native
	I0717 18:04:41.116153  401374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0717 18:04:41.116172  401374 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-453453' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-453453/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-453453' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:04:41.226808  401374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:04:41.226841  401374 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 18:04:41.226865  401374 buildroot.go:174] setting up certificates
	I0717 18:04:41.226875  401374 provision.go:84] configureAuth start
	I0717 18:04:41.226883  401374 main.go:141] libmachine: (addons-453453) Calling .GetMachineName
	I0717 18:04:41.227229  401374 main.go:141] libmachine: (addons-453453) Calling .GetIP
	I0717 18:04:41.229991  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.230300  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.230329  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.230528  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:41.233153  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.233480  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.233506  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.233662  401374 provision.go:143] copyHostCerts
	I0717 18:04:41.233818  401374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 18:04:41.234005  401374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 18:04:41.234089  401374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 18:04:41.234153  401374 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.addons-453453 san=[127.0.0.1 192.168.39.136 addons-453453 localhost minikube]
	I0717 18:04:41.350196  401374 provision.go:177] copyRemoteCerts
	I0717 18:04:41.350265  401374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:04:41.350306  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:41.352877  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.353209  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.353239  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.353380  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:41.353612  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.353791  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:41.353971  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:04:41.436436  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 18:04:41.460054  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 18:04:41.482565  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:04:41.504940  401374 provision.go:87] duration metric: took 278.052103ms to configureAuth
	I0717 18:04:41.504968  401374 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:04:41.505132  401374 config.go:182] Loaded profile config "addons-453453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:04:41.505207  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:41.508114  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.508502  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.508537  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.508672  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:41.508898  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.509120  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.509269  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:41.509470  401374 main.go:141] libmachine: Using SSH client type: native
	I0717 18:04:41.509656  401374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0717 18:04:41.509677  401374 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:04:41.761132  401374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:04:41.761159  401374 main.go:141] libmachine: Checking connection to Docker...
	I0717 18:04:41.761169  401374 main.go:141] libmachine: (addons-453453) Calling .GetURL
	I0717 18:04:41.762716  401374 main.go:141] libmachine: (addons-453453) DBG | Using libvirt version 6000000
	I0717 18:04:41.766112  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.766499  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.766540  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.766654  401374 main.go:141] libmachine: Docker is up and running!
	I0717 18:04:41.766672  401374 main.go:141] libmachine: Reticulating splines...
	I0717 18:04:41.766681  401374 client.go:171] duration metric: took 21.442756087s to LocalClient.Create
	I0717 18:04:41.766707  401374 start.go:167] duration metric: took 21.442826772s to libmachine.API.Create "addons-453453"
	I0717 18:04:41.766719  401374 start.go:293] postStartSetup for "addons-453453" (driver="kvm2")
	I0717 18:04:41.766729  401374 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:04:41.766748  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:04:41.766991  401374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:04:41.767017  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:41.768962  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.769187  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.769211  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.769338  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:41.769523  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.769690  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:41.769840  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:04:41.851033  401374 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:04:41.855354  401374 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:04:41.855382  401374 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 18:04:41.855455  401374 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 18:04:41.855482  401374 start.go:296] duration metric: took 88.757268ms for postStartSetup
	I0717 18:04:41.855526  401374 main.go:141] libmachine: (addons-453453) Calling .GetConfigRaw
	I0717 18:04:41.856089  401374 main.go:141] libmachine: (addons-453453) Calling .GetIP
	I0717 18:04:41.858768  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.859084  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.859119  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.859355  401374 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/config.json ...
	I0717 18:04:41.859531  401374 start.go:128] duration metric: took 21.55418822s to createHost
	I0717 18:04:41.859553  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:41.861892  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.862185  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.862205  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.862330  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:41.862491  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.862658  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.862760  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:41.862934  401374 main.go:141] libmachine: Using SSH client type: native
	I0717 18:04:41.863115  401374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I0717 18:04:41.863129  401374 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:04:41.960997  401374 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721239481.935458155
	
	I0717 18:04:41.961024  401374 fix.go:216] guest clock: 1721239481.935458155
	I0717 18:04:41.961040  401374 fix.go:229] Guest: 2024-07-17 18:04:41.935458155 +0000 UTC Remote: 2024-07-17 18:04:41.859542036 +0000 UTC m=+21.655321364 (delta=75.916119ms)
	I0717 18:04:41.961097  401374 fix.go:200] guest clock delta is within tolerance: 75.916119ms
	I0717 18:04:41.961108  401374 start.go:83] releasing machines lock for "addons-453453", held for 21.655862836s
	I0717 18:04:41.961140  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:04:41.961444  401374 main.go:141] libmachine: (addons-453453) Calling .GetIP
	I0717 18:04:41.964264  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.964676  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.964701  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.964855  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:04:41.965399  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:04:41.965598  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:04:41.965713  401374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:04:41.965780  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:41.965813  401374 ssh_runner.go:195] Run: cat /version.json
	I0717 18:04:41.965837  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:04:41.968520  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.968918  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.968944  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.969009  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.969098  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:41.969261  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.969433  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:41.969456  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:41.969476  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:41.969641  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:04:41.969747  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:04:41.969770  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:04:41.969934  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:04:41.970098  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:04:42.067229  401374 ssh_runner.go:195] Run: systemctl --version
	I0717 18:04:42.073039  401374 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:04:42.233292  401374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:04:42.239429  401374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:04:42.239495  401374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:04:42.255796  401374 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:04:42.255824  401374 start.go:495] detecting cgroup driver to use...
	I0717 18:04:42.255910  401374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:04:42.271553  401374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:04:42.284805  401374 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:04:42.284870  401374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:04:42.298507  401374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:04:42.311587  401374 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:04:42.420275  401374 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:04:42.559226  401374 docker.go:233] disabling docker service ...
	I0717 18:04:42.559312  401374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:04:42.573381  401374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:04:42.585885  401374 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:04:42.711110  401374 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:04:42.838705  401374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:04:42.852306  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:04:42.869920  401374 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:04:42.869978  401374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:04:42.880071  401374 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:04:42.880130  401374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:04:42.890387  401374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:04:42.900425  401374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:04:42.910537  401374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:04:42.920866  401374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:04:42.930972  401374 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:04:42.947623  401374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
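Taken together, the sed and grep edits above (18:04:42.869920 through 18:04:42.947623) amount to the following CRI-O drop-in settings; this is a sketch reconstructed from the commands in the log, not a capture of the resulting file:

	# Net effect on /etc/crio/crio.conf.d/02-crio.conf (reconstructed from the log above, not verbatim):
	#   pause_image     = "registry.k8s.io/pause:3.9"
	#   cgroup_manager  = "cgroupfs"
	#   conmon_cgroup   = "pod"
	#   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
	# A quick way to confirm the edits landed on the node:
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf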
	I0717 18:04:42.957841  401374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:04:42.966817  401374 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:04:42.966870  401374 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:04:42.979516  401374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:04:42.988650  401374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:04:43.104191  401374 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:04:43.235666  401374 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:04:43.235770  401374 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:04:43.240382  401374 start.go:563] Will wait 60s for crictl version
	I0717 18:04:43.240459  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:04:43.244006  401374 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:04:43.282118  401374 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:04:43.282222  401374 ssh_runner.go:195] Run: crio --version
	I0717 18:04:43.310168  401374 ssh_runner.go:195] Run: crio --version
	I0717 18:04:43.339548  401374 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:04:43.340692  401374 main.go:141] libmachine: (addons-453453) Calling .GetIP
	I0717 18:04:43.343416  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:43.343720  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:04:43.343749  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:04:43.344014  401374 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:04:43.348093  401374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:04:43.361061  401374 kubeadm.go:883] updating cluster {Name:addons-453453 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:addons-453453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:04:43.361179  401374 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:04:43.361237  401374 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:04:43.395918  401374 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:04:43.395998  401374 ssh_runner.go:195] Run: which lz4
	I0717 18:04:43.399795  401374 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:04:43.403952  401374 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:04:43.403985  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:04:44.678210  401374 crio.go:462] duration metric: took 1.278436492s to copy over tarball
	I0717 18:04:44.678304  401374 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:04:46.832027  401374 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153675523s)
	I0717 18:04:46.832079  401374 crio.go:469] duration metric: took 2.153831936s to extract the tarball
	I0717 18:04:46.832091  401374 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:04:46.869061  401374 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:04:46.908534  401374 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:04:46.908561  401374 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:04:46.908572  401374 kubeadm.go:934] updating node { 192.168.39.136 8443 v1.30.2 crio true true} ...
	I0717 18:04:46.908722  401374 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-453453 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-453453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:04:46.908811  401374 ssh_runner.go:195] Run: crio config
	I0717 18:04:46.953055  401374 cni.go:84] Creating CNI manager for ""
	I0717 18:04:46.953085  401374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:04:46.953101  401374 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:04:46.953122  401374 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.136 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-453453 NodeName:addons-453453 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:04:46.953267  401374 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-453453"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:04:46.953326  401374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:04:46.963316  401374 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:04:46.963378  401374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:04:46.972564  401374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 18:04:46.988106  401374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:04:47.003249  401374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0717 18:04:47.018480  401374 ssh_runner.go:195] Run: grep 192.168.39.136	control-plane.minikube.internal$ /etc/hosts
	I0717 18:04:47.022215  401374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:04:47.033901  401374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:04:47.169592  401374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:04:47.185971  401374 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453 for IP: 192.168.39.136
	I0717 18:04:47.185998  401374 certs.go:194] generating shared ca certs ...
	I0717 18:04:47.186021  401374 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.186177  401374 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 18:04:47.344873  401374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt ...
	I0717 18:04:47.344905  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt: {Name:mka017c54a2048ec5188c8b3a316b09643283b3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.345101  401374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key ...
	I0717 18:04:47.345117  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key: {Name:mkfada9ce6d628899b584576941d3e5f9fe82031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.345224  401374 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 18:04:47.480337  401374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt ...
	I0717 18:04:47.480372  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt: {Name:mk4e23d96745a6551e62956a93a29bb8c111fa53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.480587  401374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key ...
	I0717 18:04:47.480604  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key: {Name:mkc1f95c0ca70a76682e287edc9dc8ffe7c48cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.480705  401374 certs.go:256] generating profile certs ...
	I0717 18:04:47.480782  401374 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.key
	I0717 18:04:47.480803  401374 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt with IP's: []
	I0717 18:04:47.622758  401374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt ...
	I0717 18:04:47.622794  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: {Name:mk775f59f966ea9acd9c047f3474be2d435176c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.623008  401374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.key ...
	I0717 18:04:47.623024  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.key: {Name:mk1fc9bd34eb9588b680b756e27c4a6f01cc67a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.623134  401374 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.key.33496d48
	I0717 18:04:47.623157  401374 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.crt.33496d48 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.136]
	I0717 18:04:47.805937  401374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.crt.33496d48 ...
	I0717 18:04:47.805971  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.crt.33496d48: {Name:mk2c8db22765fc408949dc2494cce2ef703a745e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.806136  401374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.key.33496d48 ...
	I0717 18:04:47.806151  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.key.33496d48: {Name:mkd6774392db8e9d01d1cb342f4b7173250d8c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.806222  401374 certs.go:381] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.crt.33496d48 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.crt
	I0717 18:04:47.806293  401374 certs.go:385] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.key.33496d48 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.key
	I0717 18:04:47.806337  401374 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/proxy-client.key
	I0717 18:04:47.806354  401374 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/proxy-client.crt with IP's: []
	I0717 18:04:47.939103  401374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/proxy-client.crt ...
	I0717 18:04:47.939133  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/proxy-client.crt: {Name:mk7134d15e9441f1be34b4a25ffa1d9fac41bae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.939294  401374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/proxy-client.key ...
	I0717 18:04:47.939305  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/proxy-client.key: {Name:mk549472ec4ec266d1c194199dbcf4f049f11375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:04:47.939470  401374 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 18:04:47.939506  401374 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 18:04:47.939531  401374 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:04:47.939554  401374 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 18:04:47.940131  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:04:47.966665  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:04:47.988464  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:04:48.010561  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 18:04:48.037177  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 18:04:48.060465  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:04:48.082666  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:04:48.104419  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:04:48.127446  401374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:04:48.154038  401374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:04:48.170925  401374 ssh_runner.go:195] Run: openssl version
	I0717 18:04:48.176771  401374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:04:48.187191  401374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:04:48.191462  401374 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:04:48.191510  401374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:04:48.197340  401374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
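The b5213941.0 symlink created above follows OpenSSL's hashed-CA-directory convention: trust anchors in /etc/ssl/certs are looked up by subject-hash filenames of the form <hash>.0. A minimal sketch of the same two steps, using the minikubeCA.pem path from the log:

	# Compute the subject hash and install the hash-named symlink that OpenSSL lookups expect.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"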
	I0717 18:04:48.207474  401374 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:04:48.211375  401374 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 18:04:48.211431  401374 kubeadm.go:392] StartCluster: {Name:addons-453453 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:addons-453453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:04:48.211515  401374 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:04:48.211566  401374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:04:48.252746  401374 cri.go:89] found id: ""
	I0717 18:04:48.252826  401374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:04:48.263131  401374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:04:48.273268  401374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:04:48.282951  401374 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:04:48.282972  401374 kubeadm.go:157] found existing configuration files:
	
	I0717 18:04:48.283016  401374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:04:48.292171  401374 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:04:48.292226  401374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:04:48.302501  401374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:04:48.314007  401374 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:04:48.314071  401374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:04:48.325193  401374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:04:48.334131  401374 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:04:48.334187  401374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:04:48.343943  401374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:04:48.352841  401374 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:04:48.352890  401374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:04:48.362806  401374 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:04:48.421393  401374 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:04:48.421455  401374 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:04:48.548175  401374 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:04:48.548268  401374 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:04:48.548380  401374 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:04:48.751932  401374 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:04:48.857318  401374 out.go:204]   - Generating certificates and keys ...
	I0717 18:04:48.857450  401374 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:04:48.857527  401374 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:04:48.944043  401374 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 18:04:49.070400  401374 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 18:04:49.122961  401374 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 18:04:49.289889  401374 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 18:04:49.475463  401374 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 18:04:49.475753  401374 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-453453 localhost] and IPs [192.168.39.136 127.0.0.1 ::1]
	I0717 18:04:49.658320  401374 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 18:04:49.658519  401374 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-453453 localhost] and IPs [192.168.39.136 127.0.0.1 ::1]
	I0717 18:04:49.945905  401374 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 18:04:50.017826  401374 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 18:04:50.077618  401374 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 18:04:50.077883  401374 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:04:50.262277  401374 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:04:50.351077  401374 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:04:50.539512  401374 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:04:50.701755  401374 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:04:50.913907  401374 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:04:50.915004  401374 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:04:50.918636  401374 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:04:50.920399  401374 out.go:204]   - Booting up control plane ...
	I0717 18:04:50.920526  401374 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:04:50.920620  401374 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:04:50.921030  401374 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:04:50.936603  401374 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:04:50.937808  401374 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:04:50.937855  401374 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:04:51.058524  401374 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:04:51.058658  401374 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:04:51.559796  401374 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.750612ms
	I0717 18:04:51.559918  401374 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:04:57.057996  401374 kubeadm.go:310] [api-check] The API server is healthy after 5.502256504s
	I0717 18:04:57.070807  401374 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:04:57.085092  401374 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:04:57.114357  401374 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:04:57.114536  401374 kubeadm.go:310] [mark-control-plane] Marking the node addons-453453 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:04:57.126681  401374 kubeadm.go:310] [bootstrap-token] Using token: abmxn2.f1edq7xeq2k2tcps
	I0717 18:04:57.128121  401374 out.go:204]   - Configuring RBAC rules ...
	I0717 18:04:57.128245  401374 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:04:57.134204  401374 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:04:57.144629  401374 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:04:57.147626  401374 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:04:57.150690  401374 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:04:57.154130  401374 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:04:57.464793  401374 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:04:57.901474  401374 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:04:58.464700  401374 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:04:58.465680  401374 kubeadm.go:310] 
	I0717 18:04:58.465795  401374 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:04:58.465830  401374 kubeadm.go:310] 
	I0717 18:04:58.465939  401374 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:04:58.465951  401374 kubeadm.go:310] 
	I0717 18:04:58.466004  401374 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:04:58.466091  401374 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:04:58.466171  401374 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:04:58.466185  401374 kubeadm.go:310] 
	I0717 18:04:58.466261  401374 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:04:58.466295  401374 kubeadm.go:310] 
	I0717 18:04:58.466382  401374 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:04:58.466400  401374 kubeadm.go:310] 
	I0717 18:04:58.466483  401374 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:04:58.466577  401374 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:04:58.466673  401374 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:04:58.466682  401374 kubeadm.go:310] 
	I0717 18:04:58.466796  401374 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:04:58.466916  401374 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:04:58.466928  401374 kubeadm.go:310] 
	I0717 18:04:58.467002  401374 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token abmxn2.f1edq7xeq2k2tcps \
	I0717 18:04:58.467092  401374 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 18:04:58.467113  401374 kubeadm.go:310] 	--control-plane 
	I0717 18:04:58.467117  401374 kubeadm.go:310] 
	I0717 18:04:58.467225  401374 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:04:58.467255  401374 kubeadm.go:310] 
	I0717 18:04:58.467362  401374 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token abmxn2.f1edq7xeq2k2tcps \
	I0717 18:04:58.467494  401374 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 18:04:58.467647  401374 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:04:58.467770  401374 cni.go:84] Creating CNI manager for ""
	I0717 18:04:58.467789  401374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:04:58.469449  401374 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:04:58.470561  401374 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:04:58.481718  401374 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:04:58.512316  401374 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:04:58.512409  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:04:58.512429  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-453453 minikube.k8s.io/updated_at=2024_07_17T18_04_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=addons-453453 minikube.k8s.io/primary=true
	I0717 18:04:58.537038  401374 ops.go:34] apiserver oom_adj: -16
	I0717 18:04:58.658298  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:04:59.158767  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:04:59.658591  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:00.159162  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:00.659074  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:01.158649  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:01.658349  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:02.158619  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:02.659173  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:03.159319  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:03.658321  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:04.158335  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:04.658553  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:05.159110  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:05.659114  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:06.159298  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:06.659138  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:07.159002  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:07.659099  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:08.158583  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:08.659093  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:09.159051  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:09.658358  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:10.159121  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:10.658390  401374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:05:10.749327  401374 kubeadm.go:1113] duration metric: took 12.236977163s to wait for elevateKubeSystemPrivileges
	I0717 18:05:10.749382  401374 kubeadm.go:394] duration metric: took 22.537956123s to StartCluster
	I0717 18:05:10.749405  401374 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:05:10.749549  401374 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:05:10.750075  401374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:05:10.750279  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 18:05:10.750300  401374 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:05:10.750371  401374 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0717 18:05:10.750499  401374 addons.go:69] Setting yakd=true in profile "addons-453453"
	I0717 18:05:10.750515  401374 addons.go:69] Setting helm-tiller=true in profile "addons-453453"
	I0717 18:05:10.750526  401374 addons.go:69] Setting cloud-spanner=true in profile "addons-453453"
	I0717 18:05:10.750550  401374 addons.go:69] Setting registry=true in profile "addons-453453"
	I0717 18:05:10.750545  401374 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-453453"
	I0717 18:05:10.750565  401374 addons.go:69] Setting storage-provisioner=true in profile "addons-453453"
	I0717 18:05:10.750583  401374 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-453453"
	I0717 18:05:10.750584  401374 addons.go:69] Setting volumesnapshots=true in profile "addons-453453"
	I0717 18:05:10.750524  401374 config.go:182] Loaded profile config "addons-453453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:05:10.750591  401374 addons.go:69] Setting ingress-dns=true in profile "addons-453453"
	I0717 18:05:10.750591  401374 addons.go:69] Setting ingress=true in profile "addons-453453"
	I0717 18:05:10.750601  401374 addons.go:234] Setting addon volumesnapshots=true in "addons-453453"
	I0717 18:05:10.750601  401374 addons.go:69] Setting metrics-server=true in profile "addons-453453"
	I0717 18:05:10.750608  401374 addons.go:234] Setting addon storage-provisioner=true in "addons-453453"
	I0717 18:05:10.750614  401374 addons.go:234] Setting addon ingress-dns=true in "addons-453453"
	I0717 18:05:10.750617  401374 addons.go:234] Setting addon ingress=true in "addons-453453"
	I0717 18:05:10.750631  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.750553  401374 addons.go:69] Setting gcp-auth=true in profile "addons-453453"
	I0717 18:05:10.750642  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.750540  401374 addons.go:234] Setting addon yakd=true in "addons-453453"
	I0717 18:05:10.750651  401374 mustload.go:65] Loading cluster: addons-453453
	I0717 18:05:10.750660  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.750674  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.750676  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.750538  401374 addons.go:69] Setting default-storageclass=true in profile "addons-453453"
	I0717 18:05:10.750715  401374 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-453453"
	I0717 18:05:10.750830  401374 config.go:182] Loaded profile config "addons-453453": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:05:10.750986  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.751017  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.750555  401374 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-453453"
	I0717 18:05:10.751061  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.751065  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.750585  401374 addons.go:234] Setting addon cloud-spanner=true in "addons-453453"
	I0717 18:05:10.750575  401374 addons.go:69] Setting volcano=true in profile "addons-453453"
	I0717 18:05:10.751079  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.751097  401374 addons.go:234] Setting addon volcano=true in "addons-453453"
	I0717 18:05:10.751101  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751102  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.751114  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751120  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.751127  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751162  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.750524  401374 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-453453"
	I0717 18:05:10.750586  401374 addons.go:234] Setting addon helm-tiller=true in "addons-453453"
	I0717 18:05:10.751183  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.750621  401374 addons.go:234] Setting addon metrics-server=true in "addons-453453"
	I0717 18:05:10.751212  401374 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-453453"
	I0717 18:05:10.751163  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.750631  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.750594  401374 addons.go:69] Setting inspektor-gadget=true in profile "addons-453453"
	I0717 18:05:10.750567  401374 addons.go:234] Setting addon registry=true in "addons-453453"
	I0717 18:05:10.751069  401374 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-453453"
	I0717 18:05:10.751265  401374 addons.go:234] Setting addon inspektor-gadget=true in "addons-453453"
	I0717 18:05:10.751233  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751392  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.751120  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751469  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.751599  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.751623  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751762  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.751784  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751825  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.751844  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751853  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.751853  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.751887  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.751903  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.751942  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.752058  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.752287  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.752311  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.752328  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.752334  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.752423  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.752465  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.752570  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.752640  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.753109  401374 out.go:177] * Verifying Kubernetes components...
	I0717 18:05:10.754811  401374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:05:10.771453  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I0717 18:05:10.771473  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33559
	I0717 18:05:10.771453  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43079
	I0717 18:05:10.772114  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.772218  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.772817  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.772838  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.772989  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.772999  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.773139  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.773713  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.773735  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.773785  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.774130  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.774310  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.774352  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.774772  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.774952  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45129
	I0717 18:05:10.776855  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.780090  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0717 18:05:10.780428  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.780463  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.780610  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.780647  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.781063  401374 addons.go:234] Setting addon default-storageclass=true in "addons-453453"
	I0717 18:05:10.781111  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.781462  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.781503  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.789678  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43599
	I0717 18:05:10.789869  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45995
	I0717 18:05:10.789992  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.792500  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.792507  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.792636  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.792670  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.792695  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.794155  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.794238  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.794266  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.794312  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.794342  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.794414  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.794432  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.794773  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.794846  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.802174  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.802192  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.802174  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.802441  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.802935  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.802992  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.803104  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.803157  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.804739  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
	I0717 18:05:10.805287  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.806317  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.806336  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.807046  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.808098  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.808128  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.808811  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.809176  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.809211  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.815040  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38753
	I0717 18:05:10.815600  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.816096  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.816117  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.816197  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34723
	I0717 18:05:10.816522  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.816595  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.816722  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.817181  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.817198  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.818669  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.818741  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33629
	I0717 18:05:10.819296  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.819875  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.819899  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.820303  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.820466  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.820576  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.821520  401374 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:05:10.821958  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.822002  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.823246  401374 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:05:10.823268  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:05:10.823291  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.823388  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38403
	I0717 18:05:10.823418  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.824649  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.825488  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.825510  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.825643  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 18:05:10.826244  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.827276  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.827418  401374 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 18:05:10.827442  401374 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 18:05:10.827467  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.827473  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.827543  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.827561  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.827574  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.827584  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.827898  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.828096  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.828333  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.832614  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.832623  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45161
	I0717 18:05:10.832646  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.832664  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.832626  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.833021  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.833068  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.833464  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.833538  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.833568  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.833685  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.833981  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.834556  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.834598  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.838781  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36431
	I0717 18:05:10.838811  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
	I0717 18:05:10.838785  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36683
	I0717 18:05:10.839292  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.839362  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.839390  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.839866  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.839867  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.839888  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.839903  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.840022  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.840045  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.840259  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.840265  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.840881  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.840937  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.841383  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.841430  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.842728  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38875
	I0717 18:05:10.842875  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45449
	I0717 18:05:10.843047  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.843227  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.843258  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.843746  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.843765  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.844137  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.844588  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.844693  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.844713  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.845664  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.846066  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.848471  401374 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-453453"
	I0717 18:05:10.848538  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:10.848907  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.848938  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.849515  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.849791  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.849835  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.851270  401374 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0717 18:05:10.851995  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0717 18:05:10.852436  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.852464  401374 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0717 18:05:10.852480  401374 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0717 18:05:10.852542  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.852920  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39879
	I0717 18:05:10.853180  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.853195  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.853630  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.853743  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.854235  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.854253  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.854538  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.854703  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.855479  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.855517  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.857017  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.857514  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.857538  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.857866  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.858094  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.858553  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40967
	I0717 18:05:10.858735  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.859015  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.859310  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.859505  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.860176  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.860195  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.860932  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.861010  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 18:05:10.861571  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.861611  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.863827  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 18:05:10.864992  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 18:05:10.865310  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34951
	I0717 18:05:10.865814  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.866418  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.866435  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.866752  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40453
	I0717 18:05:10.867173  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.867409  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 18:05:10.867679  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.867702  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.868102  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.868289  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.868403  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.868652  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.869693  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 18:05:10.870167  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.871519  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 18:05:10.871547  401374 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0717 18:05:10.872763  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 18:05:10.873054  401374 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:05:10.873069  401374 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:05:10.873090  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.874454  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39919
	I0717 18:05:10.874596  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41505
	I0717 18:05:10.874787  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
	I0717 18:05:10.874975  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.875197  401374 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 18:05:10.875416  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.875434  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.875499  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.875628  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.876118  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.876135  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.876442  401374 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 18:05:10.876461  401374 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 18:05:10.876508  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.876549  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.876626  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I0717 18:05:10.876681  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.876703  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.876771  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.877090  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.877217  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.877276  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.877338  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.877358  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.877433  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.877551  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.877776  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.877974  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.878121  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.878134  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.878196  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.878418  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.878477  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.879339  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.879615  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.880022  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.880820  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.880868  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.881320  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.881345  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.882204  401374 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0717 18:05:10.883610  401374 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0717 18:05:10.883626  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0717 18:05:10.883645  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.883772  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.883843  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0717 18:05:10.883871  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.883942  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.884627  401374 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0717 18:05:10.884861  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.885172  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.885226  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.885507  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.886329  401374 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 18:05:10.886526  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.886543  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.886615  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36111
	I0717 18:05:10.886898  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.886956  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.887232  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.887251  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.887314  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.887331  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.887888  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.887959  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.887976  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.888227  401374 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0717 18:05:10.888353  401374 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 18:05:10.888649  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 18:05:10.888460  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.888672  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.888467  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46297
	I0717 18:05:10.888500  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.888850  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.888902  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.889373  401374 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 18:05:10.889473  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.889873  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:10.889914  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:10.889640  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.890132  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.890343  401374 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0717 18:05:10.890355  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0717 18:05:10.890371  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.890434  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:10.890464  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.890489  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:10.890558  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:10.890574  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:10.890581  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:10.890798  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:10.890816  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	W0717 18:05:10.890896  401374 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0717 18:05:10.890800  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:10.892958  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.892975  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.893390  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.893514  401374 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0717 18:05:10.893562  401374 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0717 18:05:10.893646  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.893775  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.894080  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.894099  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.894335  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.894393  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.894551  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.894722  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.894749  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.894773  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.894948  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.895012  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.895039  401374 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 18:05:10.895054  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0717 18:05:10.895072  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.895140  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.895268  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.895424  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.895762  401374 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 18:05:10.895779  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0717 18:05:10.895795  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.896717  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.897136  401374 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:05:10.897151  401374 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:05:10.897168  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.898679  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.898938  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I0717 18:05:10.899305  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.899329  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.899335  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.899532  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.899737  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.899856  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.899917  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.899954  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.899978  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.900124  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.900392  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.900457  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.900472  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.900671  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46391
	I0717 18:05:10.900678  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.900735  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.900949  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.901096  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.901160  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.901217  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.901854  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.901872  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.902367  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.902823  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.902837  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.903025  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.903220  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.903247  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.903500  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.903722  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.903864  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.903998  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.904763  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.904815  401374 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0717 18:05:10.906284  401374 out.go:177]   - Using image docker.io/registry:2.8.3
	I0717 18:05:10.906342  401374 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 18:05:10.906354  401374 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 18:05:10.906373  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.909275  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.909697  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.909724  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.909780  401374 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0717 18:05:10.909881  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32897
	I0717 18:05:10.909982  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.910163  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.910233  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.910336  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.910460  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.910699  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.910712  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.911035  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.911210  401374 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 18:05:10.911221  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0717 18:05:10.911233  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.911650  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:10.911687  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:10.913784  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.914133  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.914164  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.914285  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.914457  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.914611  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.914787  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:10.956750  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43207
	I0717 18:05:10.957156  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:10.957681  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:10.957710  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:10.958046  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:10.958284  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:10.959951  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:10.962156  401374 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0717 18:05:10.963754  401374 out.go:177]   - Using image docker.io/busybox:stable
	I0717 18:05:10.965313  401374 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 18:05:10.965331  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0717 18:05:10.965350  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:10.968796  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.969272  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:10.969298  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:10.969505  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:10.969738  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:10.969919  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:10.970089  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	W0717 18:05:10.972731  401374 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:58326->192.168.39.136:22: read: connection reset by peer
	I0717 18:05:10.972764  401374 retry.go:31] will retry after 168.473413ms: ssh: handshake failed: read tcp 192.168.39.1:58326->192.168.39.136:22: read: connection reset by peer
	I0717 18:05:11.159669  401374 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 18:05:11.159698  401374 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 18:05:11.267381  401374 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 18:05:11.267414  401374 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 18:05:11.269269  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:05:11.306620  401374 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0717 18:05:11.306653  401374 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0717 18:05:11.319172  401374 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:05:11.319201  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 18:05:11.335081  401374 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 18:05:11.335105  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 18:05:11.339854  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 18:05:11.350335  401374 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0717 18:05:11.350359  401374 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0717 18:05:11.366658  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 18:05:11.416690  401374 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 18:05:11.416739  401374 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 18:05:11.418645  401374 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 18:05:11.418670  401374 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 18:05:11.425034  401374 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 18:05:11.425068  401374 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 18:05:11.429887  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:05:11.536284  401374 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 18:05:11.536326  401374 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0717 18:05:11.541347  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 18:05:11.597602  401374 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0717 18:05:11.597632  401374 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0717 18:05:11.611298  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 18:05:11.612131  401374 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:05:11.612161  401374 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:05:11.626949  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 18:05:11.651615  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 18:05:11.652836  401374 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 18:05:11.652858  401374 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 18:05:11.683500  401374 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 18:05:11.683529  401374 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 18:05:11.697999  401374 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0717 18:05:11.698025  401374 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0717 18:05:11.712898  401374 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 18:05:11.712933  401374 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 18:05:11.741586  401374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:05:11.741594  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
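For context, the sed pipeline in the command above rewrites the coredns ConfigMap so the Corefile gains a hosts stanza that resolves host.minikube.internal to the host-side gateway, and also inserts a log directive ahead of the errors plugin. Reconstructed from that sed expression (a sketch of the expected result, not output captured from this run), the injected block looks like:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

The "host record injected into CoreDNS's ConfigMap" line later in the log confirms the replace succeeded.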
	I0717 18:05:11.794714  401374 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:05:11.794747  401374 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:05:11.825304  401374 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 18:05:11.825335  401374 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 18:05:11.874955  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 18:05:11.986198  401374 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0717 18:05:11.986223  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0717 18:05:12.018851  401374 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 18:05:12.018886  401374 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 18:05:12.096189  401374 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 18:05:12.096222  401374 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 18:05:12.125803  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:05:12.183177  401374 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 18:05:12.183221  401374 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 18:05:12.265334  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0717 18:05:12.322835  401374 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 18:05:12.322865  401374 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 18:05:12.332517  401374 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 18:05:12.332568  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 18:05:12.483477  401374 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 18:05:12.483510  401374 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 18:05:12.532675  401374 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 18:05:12.532706  401374 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 18:05:12.674145  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 18:05:12.825764  401374 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 18:05:12.825799  401374 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 18:05:12.856164  401374 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 18:05:12.856192  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 18:05:13.115555  401374 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 18:05:13.115583  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0717 18:05:13.173886  401374 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 18:05:13.173917  401374 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 18:05:13.308329  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 18:05:13.570969  401374 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 18:05:13.571000  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 18:05:14.348971  401374 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 18:05:14.348997  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 18:05:14.710476  401374 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 18:05:14.710504  401374 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 18:05:15.037771  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 18:05:15.541773  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.201877979s)
	I0717 18:05:15.541842  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:15.541856  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:15.541772  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.272460789s)
	I0717 18:05:15.541926  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:15.541942  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:15.542320  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:15.542358  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:15.542372  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:15.542379  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:15.542384  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:15.542391  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:15.542393  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:15.542399  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:15.542364  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:15.542674  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:15.542692  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:15.543978  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:15.543993  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:15.544010  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:17.974281  401374 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 18:05:17.974336  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:17.977506  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:17.977869  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:17.977909  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:17.978094  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:17.978296  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:17.978452  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:17.978587  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:18.415897  401374 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 18:05:18.458521  401374 addons.go:234] Setting addon gcp-auth=true in "addons-453453"
	I0717 18:05:18.458594  401374 host.go:66] Checking if "addons-453453" exists ...
	I0717 18:05:18.458939  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:18.458978  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:18.476260  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37495
	I0717 18:05:18.476869  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:18.477447  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:18.477471  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:18.477881  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:18.478640  401374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:05:18.478676  401374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:05:18.494388  401374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44457
	I0717 18:05:18.494873  401374 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:05:18.495415  401374 main.go:141] libmachine: Using API Version  1
	I0717 18:05:18.495447  401374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:05:18.495805  401374 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:05:18.496028  401374 main.go:141] libmachine: (addons-453453) Calling .GetState
	I0717 18:05:18.497650  401374 main.go:141] libmachine: (addons-453453) Calling .DriverName
	I0717 18:05:18.497930  401374 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 18:05:18.497965  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHHostname
	I0717 18:05:18.500887  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:18.501284  401374 main.go:141] libmachine: (addons-453453) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:b0:91", ip: ""} in network mk-addons-453453: {Iface:virbr1 ExpiryTime:2024-07-17 19:04:34 +0000 UTC Type:0 Mac:52:54:00:43:b0:91 Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:addons-453453 Clientid:01:52:54:00:43:b0:91}
	I0717 18:05:18.501316  401374 main.go:141] libmachine: (addons-453453) DBG | domain addons-453453 has defined IP address 192.168.39.136 and MAC address 52:54:00:43:b0:91 in network mk-addons-453453
	I0717 18:05:18.501424  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHPort
	I0717 18:05:18.501613  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHKeyPath
	I0717 18:05:18.501783  401374 main.go:141] libmachine: (addons-453453) Calling .GetSSHUsername
	I0717 18:05:18.501954  401374 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/addons-453453/id_rsa Username:docker}
	I0717 18:05:19.410063  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.043358433s)
	I0717 18:05:19.410138  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410161  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410157  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.98023887s)
	I0717 18:05:19.410242  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.868863165s)
	I0717 18:05:19.410285  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.79895578s)
	I0717 18:05:19.410286  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410353  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410372  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.78339412s)
	I0717 18:05:19.410403  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410414  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410292  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410430  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410322  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410475  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410476  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.410485  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.410485  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.758839302s)
	I0717 18:05:19.410517  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410521  401374 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.668826749s)
	I0717 18:05:19.410495  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.410543  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410543  401374 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 18:05:19.410552  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410582  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.535587402s)
	I0717 18:05:19.410604  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410614  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410719  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.284885594s)
	I0717 18:05:19.410736  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410744  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410818  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.145454016s)
	I0717 18:05:19.410833  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.410846  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410980  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.736797412s)
	I0717 18:05:19.410528  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	W0717 18:05:19.411004  401374 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 18:05:19.411047  401374 retry.go:31] will retry after 168.566382ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
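The failure above is an ordering issue rather than a broken manifest: the csi-hostpath-snapshotclass.yaml object (kind VolumeSnapshotClass) is applied in the same kubectl apply as the CRDs that define it, and the API server has not yet registered snapshot.storage.k8s.io/v1 when the class is validated, hence "resource mapping not found ... ensure CRDs are installed first". As the retry notice shows, minikube simply re-runs the whole apply a moment later (with --force, further below), which succeeds once the CRDs are established. A manual equivalent would be to split the apply and wait for the CRD first; the following is an illustrative sketch only, with file names taken from the addon paths above:

        kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
        kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
        kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml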
	I0717 18:05:19.411130  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.102770453s)
	I0717 18:05:19.411149  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.411157  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.411244  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.411264  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.411271  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.411279  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.411286  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.411325  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.411327  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.411342  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.411353  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.411355  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.411362  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.411370  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.411388  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.411396  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.411403  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.411409  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.411447  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.411454  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.411461  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.411467  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.410523  401374 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.668904904s)
	I0717 18:05:19.412608  401374 node_ready.go:35] waiting up to 6m0s for node "addons-453453" to be "Ready" ...
	I0717 18:05:19.412769  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.412803  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.412813  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.412824  401374 addons.go:475] Verifying addon ingress=true in "addons-453453"
	I0717 18:05:19.413092  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.413131  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.413140  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.413168  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.413189  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.413201  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.413216  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.413289  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.413300  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.413306  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.413309  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.413328  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.413336  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.415035  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.415086  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.415099  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.415323  401374 out.go:177] * Verifying ingress addon...
	I0717 18:05:19.416180  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.416234  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.416241  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.416249  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.416259  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.416310  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.411342  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.416349  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.416358  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.416364  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.416458  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.416466  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.416469  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.416550  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.416561  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.416570  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.416578  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.416584  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.416586  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.416593  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.416789  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.416803  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.416834  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.416842  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.417281  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.417312  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.417326  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.417335  401374 addons.go:475] Verifying addon registry=true in "addons-453453"
	I0717 18:05:19.417627  401374 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 18:05:19.418377  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.418389  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.418398  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.418406  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.418469  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.418572  401374 out.go:177] * Verifying registry addon...
	I0717 18:05:19.419932  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:19.419959  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.419968  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.419977  401374 addons.go:475] Verifying addon metrics-server=true in "addons-453453"
	I0717 18:05:19.420657  401374 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-453453 service yakd-dashboard -n yakd-dashboard
	
	I0717 18:05:19.421001  401374 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 18:05:19.438924  401374 node_ready.go:49] node "addons-453453" has status "Ready":"True"
	I0717 18:05:19.438949  401374 node_ready.go:38] duration metric: took 26.318434ms for node "addons-453453" to be "Ready" ...
	I0717 18:05:19.438959  401374 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:05:19.448510  401374 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 18:05:19.448542  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:19.465568  401374 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 18:05:19.465588  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
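The repeated "waiting for pod ..., current state: Pending" lines that follow are minikube's internal poll loop (kapi.go) checking the labelled pods until they report Ready. An approximate standalone equivalent, purely for illustration and not something the test itself runs (the 300s timeout is an assumed value), would be:

        kubectl -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=300s
        kubectl -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=300s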
	I0717 18:05:19.503823  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.503860  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.504263  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.504288  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.504292  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	W0717 18:05:19.504406  401374 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0717 18:05:19.509572  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:19.509596  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:19.509948  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:19.509972  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:19.526402  401374 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4htvx" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:19.580155  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 18:05:19.919639  401374 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-453453" context rescaled to 1 replicas
	I0717 18:05:19.922309  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:19.926451  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:20.422789  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:20.425682  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:20.929933  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:20.935331  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:21.348119  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.310282234s)
	I0717 18:05:21.348198  401374 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.850237519s)
	I0717 18:05:21.348201  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:21.348349  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:21.348687  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:21.348750  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:21.348765  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:21.348763  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:21.348773  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:21.349063  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:21.349140  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:21.349157  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:21.349174  401374 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-453453"
	I0717 18:05:21.349549  401374 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 18:05:21.350536  401374 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 18:05:21.351678  401374 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0717 18:05:21.352769  401374 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 18:05:21.352872  401374 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 18:05:21.352889  401374 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 18:05:21.382926  401374 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 18:05:21.382950  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:21.422004  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:21.426579  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:21.488468  401374 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 18:05:21.488508  401374 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 18:05:21.531805  401374 pod_ready.go:102] pod "coredns-7db6d8ff4d-4htvx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:21.574650  401374 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 18:05:21.574679  401374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0717 18:05:21.621027  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.040817998s)
	I0717 18:05:21.621094  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:21.621113  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:21.621455  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:21.621524  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:21.621547  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:21.621569  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:21.621584  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:21.621864  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:21.621903  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:21.621919  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:21.633791  401374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 18:05:21.858969  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:21.922683  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:21.925171  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:22.372735  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:22.434345  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:22.465238  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:22.642373  401374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.00854154s)
	I0717 18:05:22.642433  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:22.642451  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:22.642820  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:22.642873  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:22.642888  401374 main.go:141] libmachine: Making call to close driver server
	I0717 18:05:22.642913  401374 main.go:141] libmachine: (addons-453453) Calling .Close
	I0717 18:05:22.643175  401374 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:05:22.643199  401374 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:05:22.643216  401374 main.go:141] libmachine: (addons-453453) DBG | Closing plugin on server side
	I0717 18:05:22.645232  401374 addons.go:475] Verifying addon gcp-auth=true in "addons-453453"
	I0717 18:05:22.647743  401374 out.go:177] * Verifying gcp-auth addon...
	I0717 18:05:22.649939  401374 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 18:05:22.663063  401374 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 18:05:22.663082  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
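The kapi.go:96 entries here record minikube polling each addon's pods by label selector until every matching pod reports Ready. A minimal client-go sketch of that style of poll is included below for reference only; the kubeconfig path, namespace, selector, poll interval, and helper names are illustrative assumptions, not minikube's actual kapi implementation.

// podwait.go: illustrative sketch only — approximates the label-selector
// readiness polling seen in the kapi.go log lines; paths and names are assumed.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForSelector polls until at least one pod matches the selector and all
// matching pods are Ready, or until the context expires.
func waitForSelector(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allReady := len(pods.Items) > 0
		for i := range pods.Items {
			if !podReady(&pods.Items[i]) {
				allReady = false
				break
			}
		}
		if allReady {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // assumed poll interval
		}
	}
}

func main() {
	// Kubeconfig path, namespace, and selector below are assumptions for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForSelector(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
		panic(err)
	}
	fmt.Println("all pods matching the selector are Ready")
}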
	I0717 18:05:22.858828  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:22.922078  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:22.928259  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:23.154174  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:23.442686  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:23.444436  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:23.445520  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:23.537801  401374 pod_ready.go:102] pod "coredns-7db6d8ff4d-4htvx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:23.664413  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:23.860751  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:23.923064  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:23.926006  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:24.153254  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:24.358577  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:24.422258  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:24.425438  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:24.654527  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:24.858873  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:24.922474  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:24.925158  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:25.154439  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:25.358637  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:25.426154  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:25.428701  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:25.653842  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:25.859781  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:25.922525  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:25.925804  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:26.035619  401374 pod_ready.go:102] pod "coredns-7db6d8ff4d-4htvx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:26.154277  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:26.360922  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:26.423295  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:26.425835  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:26.653696  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:26.859650  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:26.923060  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:26.926556  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:27.155059  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:27.358355  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:27.422714  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:27.425920  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:27.654234  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:27.859310  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:27.922935  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:27.926079  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:28.035834  401374 pod_ready.go:102] pod "coredns-7db6d8ff4d-4htvx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:28.154292  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:28.358775  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:28.433538  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:28.438546  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:28.533095  401374 pod_ready.go:97] pod "coredns-7db6d8ff4d-4htvx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:28 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:11 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:11 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:11 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:11 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.136 HostIPs:[{IP:192.168.39.136}] PodIP: PodIPs:[] StartTime:2024-07-17 18:05:11 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-17 18:05:15 +0000 UTC,FinishedAt:2024-07-17 18:05:25 +0000 UTC,ContainerID:cri-o://e104951ed2a196aba5a0c41640cb6a90124bc7c26f66058d177ddf3c2b39a1bf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://e104951ed2a196aba5a0c41640cb6a90124bc7c26f66058d177ddf3c2b39a1bf Started:0xc001fa93d0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0717 18:05:28.533128  401374 pod_ready.go:81] duration metric: took 9.006702628s for pod "coredns-7db6d8ff4d-4htvx" in "kube-system" namespace to be "Ready" ...
	E0717 18:05:28.533141  401374 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-4htvx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:28 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:11 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:11 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:11 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-17 18:05:11 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.136 HostIPs:[{IP:192.168.39.136}] PodIP: PodIPs:[] StartTime:2024-07-17 18:05:11 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-17 18:05:15 +0000 UTC,FinishedAt:2024-07-17 18:05:25 +0000 UTC,ContainerID:cri-o://e104951ed2a196aba5a0c41640cb6a90124bc7c26f66058d177ddf3c2b39a1bf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://e104951ed2a196aba5a0c41640cb6a90124bc7c26f66058d177ddf3c2b39a1bf Started:0xc001fa93d0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0717 18:05:28.533148  401374 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wpzc7" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.541737  401374 pod_ready.go:92] pod "coredns-7db6d8ff4d-wpzc7" in "kube-system" namespace has status "Ready":"True"
	I0717 18:05:28.541757  401374 pod_ready.go:81] duration metric: took 8.601754ms for pod "coredns-7db6d8ff4d-wpzc7" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.541767  401374 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-453453" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.557006  401374 pod_ready.go:92] pod "etcd-addons-453453" in "kube-system" namespace has status "Ready":"True"
	I0717 18:05:28.557029  401374 pod_ready.go:81] duration metric: took 15.255712ms for pod "etcd-addons-453453" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.557038  401374 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-453453" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.561798  401374 pod_ready.go:92] pod "kube-apiserver-addons-453453" in "kube-system" namespace has status "Ready":"True"
	I0717 18:05:28.561816  401374 pod_ready.go:81] duration metric: took 4.772194ms for pod "kube-apiserver-addons-453453" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.561825  401374 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-453453" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.569525  401374 pod_ready.go:92] pod "kube-controller-manager-addons-453453" in "kube-system" namespace has status "Ready":"True"
	I0717 18:05:28.569545  401374 pod_ready.go:81] duration metric: took 7.713728ms for pod "kube-controller-manager-addons-453453" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.569558  401374 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-45g92" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.653707  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:28.858941  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:28.922146  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:28.925527  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:28.931064  401374 pod_ready.go:92] pod "kube-proxy-45g92" in "kube-system" namespace has status "Ready":"True"
	I0717 18:05:28.931093  401374 pod_ready.go:81] duration metric: took 361.527965ms for pod "kube-proxy-45g92" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:28.931106  401374 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-453453" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:29.154757  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:29.330572  401374 pod_ready.go:92] pod "kube-scheduler-addons-453453" in "kube-system" namespace has status "Ready":"True"
	I0717 18:05:29.330601  401374 pod_ready.go:81] duration metric: took 399.485702ms for pod "kube-scheduler-addons-453453" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:29.330615  401374 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace to be "Ready" ...
	I0717 18:05:29.366414  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:29.426503  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:29.428580  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:29.654435  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:29.859182  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:29.922684  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:29.925353  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:30.156271  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:30.357484  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:30.421984  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:30.424441  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:30.653995  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:30.859102  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:30.922686  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:30.926128  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:31.156241  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:31.337259  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:31.358012  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:31.427365  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:31.427783  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:31.653616  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:31.860922  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:31.922167  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:31.925222  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:32.153054  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:32.358406  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:32.422751  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:32.425936  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:32.654938  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:32.935065  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:32.935622  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:32.936920  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:33.155024  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:33.359113  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:33.422388  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:33.425475  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:33.653656  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:33.839508  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:33.858375  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:33.923056  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:33.926454  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:34.153606  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:34.358918  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:34.422205  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:34.425494  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:34.653941  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:34.858643  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:34.922990  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:34.925191  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:35.153317  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:35.359936  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:35.423119  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:35.425944  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:35.654971  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:35.865131  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:35.923327  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:35.926270  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:36.154203  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:36.337780  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:36.358954  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:36.422614  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:36.425569  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:36.653900  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:36.861597  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:36.922654  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:36.925154  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:37.154501  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:37.358148  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:37.423273  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:37.426059  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:37.656181  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:37.859214  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:37.922982  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:37.933772  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:38.154223  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:38.357850  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:38.422211  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:38.425247  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:38.653446  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:38.836798  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:38.858831  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:38.921951  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:38.925261  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:39.153197  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:39.358702  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:39.424347  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:39.426412  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:39.654063  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:39.858218  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:39.922154  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:39.924684  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:40.157786  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:40.359675  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:40.422008  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:40.426294  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:40.653978  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:40.858272  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:40.921926  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:40.924591  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:41.153691  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:41.338611  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:41.359422  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:41.421421  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:41.424793  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:41.653687  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:41.858993  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:41.922314  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:41.924631  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:42.156019  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:42.359014  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:42.421676  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:42.425631  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:42.653439  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:42.858328  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:42.922566  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:42.925285  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:43.616563  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:43.616655  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:43.617008  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:43.618089  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:43.619509  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:43.653753  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:43.858635  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:43.921919  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:43.924867  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:44.154715  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:44.360708  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:44.422017  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:44.424790  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:44.653860  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:44.858478  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:44.923159  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:44.926129  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:45.153830  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:45.358634  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:45.422405  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:45.426278  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:45.654803  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:45.837439  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:45.857646  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:45.924288  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:45.927627  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:46.154440  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:46.358742  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:46.422927  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:46.426622  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:46.654192  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:46.858451  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:46.921628  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:46.926233  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:47.154716  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:47.359244  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:47.422266  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:47.425098  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:47.654146  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:47.838711  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:47.858622  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:47.923152  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:47.926888  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:48.154095  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:48.358095  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:48.422717  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:48.428772  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:48.653880  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:48.859032  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:48.921505  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:48.925164  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:49.154289  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:49.359374  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:49.422558  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:49.425355  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:49.653824  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:49.839244  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:49.861373  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:49.929626  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:49.930376  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:50.153619  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:50.358499  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:50.422399  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:50.425174  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:50.654315  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:50.859283  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:50.921618  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:50.925586  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:51.157434  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:51.358550  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:51.422449  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:51.425305  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:51.659122  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:51.863027  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:51.922486  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:51.926207  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:52.154233  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:52.337089  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:52.358653  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:52.422178  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:52.425431  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:52.653534  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:52.858403  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:52.923532  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:52.926270  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:53.153512  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:53.359024  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:53.422613  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:53.425240  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:53.653482  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:53.858470  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:53.921294  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:53.924586  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:54.153679  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:54.337771  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:54.358525  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:54.427244  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:54.434561  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:54.653481  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:54.862390  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:54.923150  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:54.927800  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:55.154453  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:55.358857  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:55.421519  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:55.424743  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:55.653638  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:55.971461  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:55.971652  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:55.973673  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:56.155394  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:56.359441  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:56.421235  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:56.425187  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:56.654466  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:56.838382  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:56.860921  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:56.921671  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:56.925359  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:57.153106  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:57.358541  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:57.421696  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:57.425836  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:57.653562  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:57.858291  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:57.922402  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:57.925140  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:58.333161  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:58.358297  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:58.423015  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:58.425665  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:58.654078  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:58.858970  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:58.922436  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:58.925053  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:05:59.154034  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:59.336368  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:05:59.358351  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:59.422362  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:05:59.434020  401374 kapi.go:107] duration metric: took 40.013013684s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 18:05:59.654810  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:05:59.858732  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:05:59.922486  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:00.153542  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:00.370121  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:00.421860  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:00.653872  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:00.858338  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:00.922660  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:01.154341  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:01.337063  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:01.363806  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:01.421963  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:01.654788  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:01.858362  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:01.925345  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:02.154279  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:02.358154  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:02.423012  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:02.654218  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:02.858521  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:02.923180  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:03.153997  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:03.358445  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:03.422270  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:03.654399  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:03.837370  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:03.860612  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:03.922428  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:04.158667  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:04.858163  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:04.858508  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:04.862503  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:04.867765  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:04.921654  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:05.153890  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:05.363473  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:05.425187  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:05.653905  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:05.837435  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:05.858451  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:05.922094  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:06.154165  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:06.360750  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:06.426124  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:06.653752  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:06.859838  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:06.922080  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:07.154870  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:07.366847  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:07.422034  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:07.654393  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:07.857630  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:07.921789  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:08.153841  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:08.337432  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:08.360166  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:08.422960  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:08.653815  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:08.858972  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:08.921754  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:09.153426  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:09.363444  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:09.422525  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:09.654275  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:09.863909  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:09.922056  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:10.154832  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:10.358056  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:10.422491  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:10.653339  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:10.836657  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:10.858639  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:10.922435  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:11.155166  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:11.357487  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:11.422076  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:11.653782  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:11.859723  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:11.923244  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:12.154667  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:12.361256  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:12.422665  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:12.653176  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:12.848194  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:12.883554  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:13.389224  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:13.389302  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:13.393590  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:13.422089  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:13.654159  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:13.858594  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:13.922236  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:14.154257  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:14.357899  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:14.422932  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:14.654235  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:14.857482  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:14.921818  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:15.153599  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:15.336356  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:15.357673  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:15.421461  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:15.654328  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:15.857881  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:15.923255  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:16.154562  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:16.363483  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:16.424898  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:16.654528  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:16.859263  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:16.921875  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:17.154474  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:17.336462  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:17.357719  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:17.421484  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:17.653298  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:17.859168  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:17.922781  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:18.153632  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:18.358581  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:18.421949  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:18.655700  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:18.858122  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:18.921783  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:19.153360  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:19.338471  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:19.357878  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:19.421541  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:19.653396  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:19.862335  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:19.922920  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:20.153714  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:20.361870  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:20.425166  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:20.654761  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:20.857816  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:20.922063  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:21.153635  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:21.357339  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:21.425356  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:21.653540  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:21.955279  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:21.957006  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:21.961758  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:22.154769  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:22.357988  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:22.421935  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:22.653734  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:22.857372  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:22.923175  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:23.154321  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:23.357375  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:23.422581  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:23.653600  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:23.868454  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:23.922877  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:24.153915  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:24.340419  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:24.357828  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:24.422360  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:24.654531  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:24.868726  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:24.932146  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:25.153918  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:25.358317  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:25.422479  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:25.656301  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:25.858358  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:25.926497  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:26.154805  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:26.357968  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:26.421939  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:26.654531  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:27.134724  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:27.135042  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:27.138907  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:27.164035  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:27.358481  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:27.421486  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:27.654011  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:27.858712  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:27.922531  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:28.155575  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:28.360205  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:28.437976  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:28.654330  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:28.858382  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:28.922619  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:29.153407  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:29.396369  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:29.397864  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:29.421777  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:29.665000  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:29.857317  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:29.922257  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:30.154333  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:30.359417  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:30.422069  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:30.653823  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:30.858127  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:30.921385  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:31.154170  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:31.357527  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:31.421283  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:31.654181  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:31.848523  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:31.861173  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:31.922325  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:32.154484  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:32.358303  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:32.422566  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:32.653302  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:32.857723  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:32.922404  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:33.154335  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:33.358290  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:33.422245  401374 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:06:33.653686  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:33.857951  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:33.921882  401374 kapi.go:107] duration metric: took 1m14.504254462s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 18:06:34.153659  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:34.337858  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:34.358464  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:34.653373  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:34.859889  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:35.153978  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:35.358827  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:35.653560  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:35.858083  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:36.154525  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:36.358514  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:36.654153  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:36.836208  401374 pod_ready.go:102] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"False"
	I0717 18:06:36.857358  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:37.154449  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:37.338492  401374 pod_ready.go:92] pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace has status "Ready":"True"
	I0717 18:06:37.338526  401374 pod_ready.go:81] duration metric: took 1m8.007903343s for pod "metrics-server-c59844bb4-5m4fv" in "kube-system" namespace to be "Ready" ...
	I0717 18:06:37.338541  401374 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-h5kz7" in "kube-system" namespace to be "Ready" ...
	I0717 18:06:37.345777  401374 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-h5kz7" in "kube-system" namespace has status "Ready":"True"
	I0717 18:06:37.345801  401374 pod_ready.go:81] duration metric: took 7.25164ms for pod "nvidia-device-plugin-daemonset-h5kz7" in "kube-system" namespace to be "Ready" ...
	I0717 18:06:37.345826  401374 pod_ready.go:38] duration metric: took 1m17.906855494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
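The wait summarized above covers the system-critical pods plus every pod carrying one of the listed labels. A rough manual equivalent, expressed as a single kubectl command against the same cluster (minikube performs this check with its own client, so this is only an illustration; k8s-app=kube-dns is one of the label selectors listed in the log line above):

    kubectl --context addons-453453 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s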
	I0717 18:06:37.345860  401374 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:06:37.345895  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:06:37.345958  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:06:37.359663  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:37.467268  401374 cri.go:89] found id: "b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68"
	I0717 18:06:37.467293  401374 cri.go:89] found id: ""
	I0717 18:06:37.467302  401374 logs.go:276] 1 containers: [b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68]
	I0717 18:06:37.467361  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:37.474729  401374 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:06:37.474803  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:06:37.545523  401374 cri.go:89] found id: "b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92"
	I0717 18:06:37.545648  401374 cri.go:89] found id: ""
	I0717 18:06:37.545707  401374 logs.go:276] 1 containers: [b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92]
	I0717 18:06:37.545774  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:37.553547  401374 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:06:37.553631  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:06:37.609478  401374 cri.go:89] found id: "d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26"
	I0717 18:06:37.609505  401374 cri.go:89] found id: ""
	I0717 18:06:37.609515  401374 logs.go:276] 1 containers: [d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26]
	I0717 18:06:37.609576  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:37.614797  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:06:37.614874  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:06:37.653467  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:06:37.684402  401374 cri.go:89] found id: "259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5"
	I0717 18:06:37.684430  401374 cri.go:89] found id: ""
	I0717 18:06:37.684439  401374 logs.go:276] 1 containers: [259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5]
	I0717 18:06:37.684511  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:37.695308  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:06:37.695397  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:06:37.741251  401374 cri.go:89] found id: "2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3"
	I0717 18:06:37.741285  401374 cri.go:89] found id: ""
	I0717 18:06:37.741295  401374 logs.go:276] 1 containers: [2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3]
	I0717 18:06:37.741351  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:37.748077  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:06:37.748151  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:06:37.842345  401374 cri.go:89] found id: "35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1"
	I0717 18:06:37.842369  401374 cri.go:89] found id: ""
	I0717 18:06:37.842378  401374 logs.go:276] 1 containers: [35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1]
	I0717 18:06:37.842445  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:37.851317  401374 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:06:37.851397  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:06:37.864948  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:37.939382  401374 cri.go:89] found id: ""
	I0717 18:06:37.939408  401374 logs.go:276] 0 containers: []
	W0717 18:06:37.939418  401374 logs.go:278] No container was found matching "kindnet"
	I0717 18:06:37.939429  401374 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:06:37.939449  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 18:06:38.102474  401374 logs.go:123] Gathering logs for kube-apiserver [b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68] ...
	I0717 18:06:38.102503  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68"
	I0717 18:06:38.153765  401374 kapi.go:107] duration metric: took 1m15.50381932s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 18:06:38.155699  401374 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-453453 cluster.
	I0717 18:06:38.157029  401374 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 18:06:38.158346  401374 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
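The gcp-auth messages above describe how to opt a pod out of credential injection: include the gcp-auth-skip-secret label key in the pod's configuration before it is created. A minimal sketch of such a pod, assuming a hypothetical pod name and image (per the message, only the label key matters):

    kubectl --context addons-453453 run skip-demo --image=busybox --restart=Never \
      --labels=gcp-auth-skip-secret=true -- sleep 3600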
	I0717 18:06:38.203281  401374 logs.go:123] Gathering logs for etcd [b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92] ...
	I0717 18:06:38.203313  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92"
	I0717 18:06:38.306026  401374 logs.go:123] Gathering logs for coredns [d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26] ...
	I0717 18:06:38.306065  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26"
	I0717 18:06:38.359053  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:38.406973  401374 logs.go:123] Gathering logs for kube-scheduler [259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5] ...
	I0717 18:06:38.407015  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5"
	I0717 18:06:38.510738  401374 logs.go:123] Gathering logs for kube-proxy [2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3] ...
	I0717 18:06:38.510773  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3"
	I0717 18:06:38.567448  401374 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:06:38.567488  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:06:38.859403  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:38.919365  401374 logs.go:123] Gathering logs for kubelet ...
	I0717 18:06:38.919402  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 18:06:38.998768  401374 logs.go:138] Found kubelet problem: Jul 17 18:05:17 addons-453453 kubelet[1277]: W0717 18:05:17.370589    1277 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	W0717 18:06:38.999024  401374 logs.go:138] Found kubelet problem: Jul 17 18:05:17 addons-453453 kubelet[1277]: E0717 18:05:17.370689    1277 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	I0717 18:06:39.029480  401374 logs.go:123] Gathering logs for kube-controller-manager [35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1] ...
	I0717 18:06:39.029527  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1"
	I0717 18:06:39.126133  401374 logs.go:123] Gathering logs for container status ...
	I0717 18:06:39.126183  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:06:39.249805  401374 logs.go:123] Gathering logs for dmesg ...
	I0717 18:06:39.249853  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:06:39.279848  401374 out.go:304] Setting ErrFile to fd 2...
	I0717 18:06:39.279880  401374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 18:06:39.279948  401374 out.go:239] X Problems detected in kubelet:
	W0717 18:06:39.279966  401374 out.go:239]   Jul 17 18:05:17 addons-453453 kubelet[1277]: W0717 18:05:17.370589    1277 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	W0717 18:06:39.279986  401374 out.go:239]   Jul 17 18:05:17 addons-453453 kubelet[1277]: E0717 18:05:17.370689    1277 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	I0717 18:06:39.279999  401374 out.go:304] Setting ErrFile to fd 2...
	I0717 18:06:39.280008  401374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:06:39.359511  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:39.861150  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:40.358492  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:40.861662  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:41.358067  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:41.986710  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:42.361146  401374 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:06:42.858215  401374 kapi.go:107] duration metric: took 1m21.505444005s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 18:06:42.859950  401374 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, nvidia-device-plugin, ingress-dns, helm-tiller, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0717 18:06:42.861320  401374 addons.go:510] duration metric: took 1m32.110951894s for enable addons: enabled=[storage-provisioner cloud-spanner nvidia-device-plugin ingress-dns helm-tiller inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
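With all fourteen addons reported enabled, one quick way to confirm their status from the host is minikube's addon listing, shown here against the addons-453453 profile used by this run (output formatting may differ by minikube version):

    minikube -p addons-453453 addons list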
	I0717 18:06:49.280741  401374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:06:49.300712  401374 api_server.go:72] duration metric: took 1m38.550379192s to wait for apiserver process to appear ...
	I0717 18:06:49.300753  401374 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:06:49.300802  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:06:49.300871  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:06:49.342717  401374 cri.go:89] found id: "b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68"
	I0717 18:06:49.342739  401374 cri.go:89] found id: ""
	I0717 18:06:49.342748  401374 logs.go:276] 1 containers: [b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68]
	I0717 18:06:49.342815  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:49.346989  401374 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:06:49.347046  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:06:49.389991  401374 cri.go:89] found id: "b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92"
	I0717 18:06:49.390018  401374 cri.go:89] found id: ""
	I0717 18:06:49.390026  401374 logs.go:276] 1 containers: [b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92]
	I0717 18:06:49.390079  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:49.394539  401374 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:06:49.394611  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:06:49.431648  401374 cri.go:89] found id: "d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26"
	I0717 18:06:49.431679  401374 cri.go:89] found id: ""
	I0717 18:06:49.431691  401374 logs.go:276] 1 containers: [d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26]
	I0717 18:06:49.431754  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:49.436288  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:06:49.436358  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:06:49.482361  401374 cri.go:89] found id: "259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5"
	I0717 18:06:49.482392  401374 cri.go:89] found id: ""
	I0717 18:06:49.482403  401374 logs.go:276] 1 containers: [259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5]
	I0717 18:06:49.482469  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:49.491021  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:06:49.491105  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:06:49.540530  401374 cri.go:89] found id: "2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3"
	I0717 18:06:49.540564  401374 cri.go:89] found id: ""
	I0717 18:06:49.540576  401374 logs.go:276] 1 containers: [2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3]
	I0717 18:06:49.540638  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:49.545021  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:06:49.545083  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:06:49.587754  401374 cri.go:89] found id: "35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1"
	I0717 18:06:49.587777  401374 cri.go:89] found id: ""
	I0717 18:06:49.587788  401374 logs.go:276] 1 containers: [35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1]
	I0717 18:06:49.587839  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:06:49.592068  401374 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:06:49.592133  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:06:49.630719  401374 cri.go:89] found id: ""
	I0717 18:06:49.630751  401374 logs.go:276] 0 containers: []
	W0717 18:06:49.630763  401374 logs.go:278] No container was found matching "kindnet"
	I0717 18:06:49.630775  401374 logs.go:123] Gathering logs for kubelet ...
	I0717 18:06:49.630793  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 18:06:49.686998  401374 logs.go:138] Found kubelet problem: Jul 17 18:05:17 addons-453453 kubelet[1277]: W0717 18:05:17.370589    1277 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	W0717 18:06:49.687177  401374 logs.go:138] Found kubelet problem: Jul 17 18:05:17 addons-453453 kubelet[1277]: E0717 18:05:17.370689    1277 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	I0717 18:06:49.712837  401374 logs.go:123] Gathering logs for dmesg ...
	I0717 18:06:49.712880  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:06:49.729337  401374 logs.go:123] Gathering logs for etcd [b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92] ...
	I0717 18:06:49.729371  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92"
	I0717 18:06:49.787944  401374 logs.go:123] Gathering logs for kube-scheduler [259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5] ...
	I0717 18:06:49.787979  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5"
	I0717 18:06:49.848075  401374 logs.go:123] Gathering logs for kube-controller-manager [35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1] ...
	I0717 18:06:49.848112  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1"
	I0717 18:06:49.910656  401374 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:06:49.910691  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 18:06:50.044022  401374 logs.go:123] Gathering logs for kube-apiserver [b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68] ...
	I0717 18:06:50.044054  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68"
	I0717 18:06:50.095111  401374 logs.go:123] Gathering logs for coredns [d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26] ...
	I0717 18:06:50.095146  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26"
	I0717 18:06:50.134636  401374 logs.go:123] Gathering logs for kube-proxy [2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3] ...
	I0717 18:06:50.134673  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3"
	I0717 18:06:50.173045  401374 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:06:50.173074  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:06:51.021034  401374 logs.go:123] Gathering logs for container status ...
	I0717 18:06:51.021092  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:06:51.076250  401374 out.go:304] Setting ErrFile to fd 2...
	I0717 18:06:51.076290  401374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 18:06:51.076357  401374 out.go:239] X Problems detected in kubelet:
	W0717 18:06:51.076374  401374 out.go:239]   Jul 17 18:05:17 addons-453453 kubelet[1277]: W0717 18:05:17.370589    1277 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	W0717 18:06:51.076386  401374 out.go:239]   Jul 17 18:05:17 addons-453453 kubelet[1277]: E0717 18:05:17.370689    1277 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	I0717 18:06:51.076397  401374 out.go:304] Setting ErrFile to fd 2...
	I0717 18:06:51.076403  401374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:07:01.077317  401374 api_server.go:253] Checking apiserver healthz at https://192.168.39.136:8443/healthz ...
	I0717 18:07:01.081975  401374 api_server.go:279] https://192.168.39.136:8443/healthz returned 200:
	ok
	I0717 18:07:01.083108  401374 api_server.go:141] control plane version: v1.30.2
	I0717 18:07:01.083131  401374 api_server.go:131] duration metric: took 11.782371865s to wait for apiserver health ...
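The healthz probe above hits the API server endpoint directly. A comparable manual check from the host, assuming the default anonymous access to /healthz that recent Kubernetes releases grant through the system:public-info-viewer binding (supply the cluster CA or a bearer token instead if anonymous access is disabled):

    curl -k https://192.168.39.136:8443/healthz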
	I0717 18:07:01.083140  401374 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:07:01.083162  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:07:01.083211  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:07:01.124610  401374 cri.go:89] found id: "b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68"
	I0717 18:07:01.124651  401374 cri.go:89] found id: ""
	I0717 18:07:01.124662  401374 logs.go:276] 1 containers: [b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68]
	I0717 18:07:01.124732  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:07:01.130070  401374 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:07:01.130137  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:07:01.176369  401374 cri.go:89] found id: "b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92"
	I0717 18:07:01.176401  401374 cri.go:89] found id: ""
	I0717 18:07:01.176410  401374 logs.go:276] 1 containers: [b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92]
	I0717 18:07:01.176473  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:07:01.181519  401374 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:07:01.181598  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:07:01.221814  401374 cri.go:89] found id: "d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26"
	I0717 18:07:01.221842  401374 cri.go:89] found id: ""
	I0717 18:07:01.221852  401374 logs.go:276] 1 containers: [d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26]
	I0717 18:07:01.221921  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:07:01.226065  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:07:01.226129  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:07:01.265271  401374 cri.go:89] found id: "259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5"
	I0717 18:07:01.265296  401374 cri.go:89] found id: ""
	I0717 18:07:01.265307  401374 logs.go:276] 1 containers: [259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5]
	I0717 18:07:01.265366  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:07:01.269699  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:07:01.269762  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:07:01.307878  401374 cri.go:89] found id: "2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3"
	I0717 18:07:01.307914  401374 cri.go:89] found id: ""
	I0717 18:07:01.307924  401374 logs.go:276] 1 containers: [2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3]
	I0717 18:07:01.307994  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:07:01.312097  401374 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:07:01.312159  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:07:01.351048  401374 cri.go:89] found id: "35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1"
	I0717 18:07:01.351079  401374 cri.go:89] found id: ""
	I0717 18:07:01.351091  401374 logs.go:276] 1 containers: [35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1]
	I0717 18:07:01.351154  401374 ssh_runner.go:195] Run: which crictl
	I0717 18:07:01.355191  401374 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:07:01.355271  401374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:07:01.404615  401374 cri.go:89] found id: ""
	I0717 18:07:01.404648  401374 logs.go:276] 0 containers: []
	W0717 18:07:01.404657  401374 logs.go:278] No container was found matching "kindnet"
	I0717 18:07:01.404667  401374 logs.go:123] Gathering logs for etcd [b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92] ...
	I0717 18:07:01.404683  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92"
	I0717 18:07:01.460531  401374 logs.go:123] Gathering logs for kube-scheduler [259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5] ...
	I0717 18:07:01.460568  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5"
	I0717 18:07:01.508356  401374 logs.go:123] Gathering logs for kube-proxy [2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3] ...
	I0717 18:07:01.508400  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3"
	I0717 18:07:01.550988  401374 logs.go:123] Gathering logs for kube-controller-manager [35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1] ...
	I0717 18:07:01.551021  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1"
	I0717 18:07:01.612085  401374 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:07:01.612130  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:07:02.595712  401374 logs.go:123] Gathering logs for container status ...
	I0717 18:07:02.595776  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:07:02.650570  401374 logs.go:123] Gathering logs for kubelet ...
	I0717 18:07:02.650624  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 18:07:02.703982  401374 logs.go:138] Found kubelet problem: Jul 17 18:05:17 addons-453453 kubelet[1277]: W0717 18:05:17.370589    1277 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	W0717 18:07:02.704161  401374 logs.go:138] Found kubelet problem: Jul 17 18:05:17 addons-453453 kubelet[1277]: E0717 18:05:17.370689    1277 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	I0717 18:07:02.731032  401374 logs.go:123] Gathering logs for dmesg ...
	I0717 18:07:02.731076  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:07:02.746793  401374 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:07:02.746831  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 18:07:02.882527  401374 logs.go:123] Gathering logs for kube-apiserver [b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68] ...
	I0717 18:07:02.882584  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68"
	I0717 18:07:02.940182  401374 logs.go:123] Gathering logs for coredns [d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26] ...
	I0717 18:07:02.940235  401374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26"
	I0717 18:07:02.979974  401374 out.go:304] Setting ErrFile to fd 2...
	I0717 18:07:02.980010  401374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 18:07:02.980077  401374 out.go:239] X Problems detected in kubelet:
	W0717 18:07:02.980089  401374 out.go:239]   Jul 17 18:05:17 addons-453453 kubelet[1277]: W0717 18:05:17.370589    1277 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	W0717 18:07:02.980099  401374 out.go:239]   Jul 17 18:05:17 addons-453453 kubelet[1277]: E0717 18:05:17.370689    1277 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-453453" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-453453' and this object
	I0717 18:07:02.980109  401374 out.go:304] Setting ErrFile to fd 2...
	I0717 18:07:02.980115  401374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:07:12.990067  401374 system_pods.go:59] 18 kube-system pods found
	I0717 18:07:12.990100  401374 system_pods.go:61] "coredns-7db6d8ff4d-wpzc7" [31ed1339-07ca-4d41-a32f-3a2b203555e1] Running
	I0717 18:07:12.990105  401374 system_pods.go:61] "csi-hostpath-attacher-0" [97417d9d-ca84-4bf4-abc2-41be2734c7ac] Running
	I0717 18:07:12.990108  401374 system_pods.go:61] "csi-hostpath-resizer-0" [943ebd63-ae02-4da1-9fec-0714f480d246] Running
	I0717 18:07:12.990112  401374 system_pods.go:61] "csi-hostpathplugin-fbs7w" [95d2c04d-7e7a-42eb-950b-e156bb27b489] Running
	I0717 18:07:12.990115  401374 system_pods.go:61] "etcd-addons-453453" [d6799cd0-a3dd-4395-9423-eff734cbe921] Running
	I0717 18:07:12.990118  401374 system_pods.go:61] "kube-apiserver-addons-453453" [912afe9d-e769-41b0-80a2-f4a3e649311a] Running
	I0717 18:07:12.990122  401374 system_pods.go:61] "kube-controller-manager-addons-453453" [0083af6a-7f5f-439c-9dde-13f8e0bf3476] Running
	I0717 18:07:12.990125  401374 system_pods.go:61] "kube-ingress-dns-minikube" [62d0dcb4-1d9b-4177-b580-84291702a582] Running
	I0717 18:07:12.990128  401374 system_pods.go:61] "kube-proxy-45g92" [287c805c-5dbe-4f01-8153-dcf0424c2edc] Running
	I0717 18:07:12.990130  401374 system_pods.go:61] "kube-scheduler-addons-453453" [9177b89e-81eb-4f2d-a7ed-46e7a240e284] Running
	I0717 18:07:12.990135  401374 system_pods.go:61] "metrics-server-c59844bb4-5m4fv" [886d3903-d44e-489c-bf8d-be11494d150b] Running
	I0717 18:07:12.990138  401374 system_pods.go:61] "nvidia-device-plugin-daemonset-h5kz7" [b8017821-48d3-427f-87a1-64e210b8ca26] Running
	I0717 18:07:12.990141  401374 system_pods.go:61] "registry-656c9c8d9c-mdcds" [2aea3a0e-bf77-437f-ada1-99cf0afc991d] Running
	I0717 18:07:12.990144  401374 system_pods.go:61] "registry-proxy-bvkbp" [ee546b39-8d72-4a83-b1f0-5d08d5ba2998] Running
	I0717 18:07:12.990146  401374 system_pods.go:61] "snapshot-controller-745499f584-dpztl" [bcd0cc4c-df7f-4853-8d86-34efa6b7ee6b] Running
	I0717 18:07:12.990150  401374 system_pods.go:61] "snapshot-controller-745499f584-n7cpl" [e8c47f09-2db9-4af5-969d-38ebec140574] Running
	I0717 18:07:12.990153  401374 system_pods.go:61] "storage-provisioner" [eb1c997d-8a91-402e-aabd-c19ce8771f6e] Running
	I0717 18:07:12.990155  401374 system_pods.go:61] "tiller-deploy-6677d64bcd-g4wtr" [05df6af2-4add-4e71-b8e0-eb055c2f28cc] Running
	I0717 18:07:12.990162  401374 system_pods.go:74] duration metric: took 11.907016272s to wait for pod list to return data ...
	I0717 18:07:12.990175  401374 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:07:12.992762  401374 default_sa.go:45] found service account: "default"
	I0717 18:07:12.992783  401374 default_sa.go:55] duration metric: took 2.602582ms for default service account to be created ...
	I0717 18:07:12.992789  401374 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:07:13.001781  401374 system_pods.go:86] 18 kube-system pods found
	I0717 18:07:13.001809  401374 system_pods.go:89] "coredns-7db6d8ff4d-wpzc7" [31ed1339-07ca-4d41-a32f-3a2b203555e1] Running
	I0717 18:07:13.001814  401374 system_pods.go:89] "csi-hostpath-attacher-0" [97417d9d-ca84-4bf4-abc2-41be2734c7ac] Running
	I0717 18:07:13.001819  401374 system_pods.go:89] "csi-hostpath-resizer-0" [943ebd63-ae02-4da1-9fec-0714f480d246] Running
	I0717 18:07:13.001823  401374 system_pods.go:89] "csi-hostpathplugin-fbs7w" [95d2c04d-7e7a-42eb-950b-e156bb27b489] Running
	I0717 18:07:13.001827  401374 system_pods.go:89] "etcd-addons-453453" [d6799cd0-a3dd-4395-9423-eff734cbe921] Running
	I0717 18:07:13.001831  401374 system_pods.go:89] "kube-apiserver-addons-453453" [912afe9d-e769-41b0-80a2-f4a3e649311a] Running
	I0717 18:07:13.001836  401374 system_pods.go:89] "kube-controller-manager-addons-453453" [0083af6a-7f5f-439c-9dde-13f8e0bf3476] Running
	I0717 18:07:13.001841  401374 system_pods.go:89] "kube-ingress-dns-minikube" [62d0dcb4-1d9b-4177-b580-84291702a582] Running
	I0717 18:07:13.001845  401374 system_pods.go:89] "kube-proxy-45g92" [287c805c-5dbe-4f01-8153-dcf0424c2edc] Running
	I0717 18:07:13.001850  401374 system_pods.go:89] "kube-scheduler-addons-453453" [9177b89e-81eb-4f2d-a7ed-46e7a240e284] Running
	I0717 18:07:13.001857  401374 system_pods.go:89] "metrics-server-c59844bb4-5m4fv" [886d3903-d44e-489c-bf8d-be11494d150b] Running
	I0717 18:07:13.001861  401374 system_pods.go:89] "nvidia-device-plugin-daemonset-h5kz7" [b8017821-48d3-427f-87a1-64e210b8ca26] Running
	I0717 18:07:13.001865  401374 system_pods.go:89] "registry-656c9c8d9c-mdcds" [2aea3a0e-bf77-437f-ada1-99cf0afc991d] Running
	I0717 18:07:13.001870  401374 system_pods.go:89] "registry-proxy-bvkbp" [ee546b39-8d72-4a83-b1f0-5d08d5ba2998] Running
	I0717 18:07:13.001874  401374 system_pods.go:89] "snapshot-controller-745499f584-dpztl" [bcd0cc4c-df7f-4853-8d86-34efa6b7ee6b] Running
	I0717 18:07:13.001880  401374 system_pods.go:89] "snapshot-controller-745499f584-n7cpl" [e8c47f09-2db9-4af5-969d-38ebec140574] Running
	I0717 18:07:13.001883  401374 system_pods.go:89] "storage-provisioner" [eb1c997d-8a91-402e-aabd-c19ce8771f6e] Running
	I0717 18:07:13.001887  401374 system_pods.go:89] "tiller-deploy-6677d64bcd-g4wtr" [05df6af2-4add-4e71-b8e0-eb055c2f28cc] Running
	I0717 18:07:13.001893  401374 system_pods.go:126] duration metric: took 9.098881ms to wait for k8s-apps to be running ...
	I0717 18:07:13.001906  401374 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:07:13.001956  401374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:07:13.019010  401374 system_svc.go:56] duration metric: took 17.095697ms WaitForService to wait for kubelet
	I0717 18:07:13.019047  401374 kubeadm.go:582] duration metric: took 2m2.268722577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:07:13.019077  401374 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:07:13.022619  401374 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:07:13.022645  401374 node_conditions.go:123] node cpu capacity is 2
	I0717 18:07:13.022658  401374 node_conditions.go:105] duration metric: took 3.575525ms to run NodePressure ...
	I0717 18:07:13.022671  401374 start.go:241] waiting for startup goroutines ...
	I0717 18:07:13.022680  401374 start.go:246] waiting for cluster config update ...
	I0717 18:07:13.022702  401374 start.go:255] writing updated cluster config ...
	I0717 18:07:13.023036  401374 ssh_runner.go:195] Run: rm -f paused
	I0717 18:07:13.076888  401374 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:07:13.079339  401374 out.go:177] * Done! kubectl is now configured to use "addons-453453" cluster and "default" namespace by default
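	The start log above ends with the same diagnostics pass that gathers kubelet, dmesg, container-status and per-container logs over SSH (the ssh_runner lines near the top of this block). To re-run those checks by hand against the same node, the commands below are copied from those lines; this is a minimal sketch, assuming the addons-453453 VM is still running and reachable with `minikube ssh -p addons-453453` — the <container-id> placeholder is illustrative and should be replaced with an ID from the crictl listing:
	
	# open a shell on the test VM (profile name taken from the log above)
	minikube ssh -p addons-453453
	
	# kubelet and kernel logs, as collected by logs.go
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	
	# container status via crictl, falling back to docker when crictl is absent
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	
	# node description using the kubeconfig baked into the VM
	sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	
	# logs of a single container; <container-id> is a placeholder for an ID from the crictl listing
	sudo /usr/bin/crictl logs --tail 400 <container-id>
	
	The kube-apiserver and coredns container IDs queried above (b0a42f1bfe6f... and d45bcf1eb6ba...) came from that same crictl container listing.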
	
	
	==> CRI-O <==
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.933290408Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36304396-ca38-402f-a6a1-45bb3c56cc4b name=/runtime.v1.RuntimeService/Version
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.934578227Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5afcfd4b-8504-411e-b7c7-1497a391531e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.935909862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721239996935879970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5afcfd4b-8504-411e-b7c7-1497a391531e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.936358132Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f0fbb68-8b9f-46f6-85d2-205f87eaec8b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.936429848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f0fbb68-8b9f-46f6-85d2-205f87eaec8b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.936728231Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2918bb8f28eb43f891719f323c69145102d464c1c37fdaf9a33bae22afe1d1d0,PodSandboxId:33eeb5aa7d898ca3506d24a719c6b5bf2dab23a16b578b1a86d1c77127e8995d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721239812819686025,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-6bfmd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b273295-d4f1-43aa-b0ef-d148763f6593,},Annotations:map[string]string{io.kubernetes.container.hash: cc4c9615,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d503fb774476ce51dec196a5540f7f1a895198a9458d0ac60141eb335ebfbf0,PodSandboxId:1582aebb9ca26d07e9d5bee806549d6b91f144053e0fdb99ac6b8cd49eea4c23,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721239671387457770,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6918b754-82dd-4b43-acdd-204f3a8419d3,},Annotations:map[string]string{io.kubernet
es.container.hash: fd6b8330,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c1926654c9521721b2311606c653c2e711eaaa8cf42a672c919ad0693abd00,PodSandboxId:f18424687dfa0862df3c461ff4981f78c54951632533e881bee7b0c54528f36c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721239640261678791,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-29grz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: b89e8f1b-24a4-46f3-b300-72f6c803f7d6,},Annotations:map[string]string{io.kubernetes.container.hash: 97be45f8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab3688fd15de45b760f574e9673fa61a7686ac369815e917070b3418d588be8,PodSandboxId:63ff63a9bff7c4100e37fbbba69011f462ee746505f9d36bd2c197cc815f02f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721239597302212555,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-7d9fn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f4619892-e5ff-45a2-b2d8-001fba539eb6,},Annotations:map[string]string{io.kubernetes.container.hash: cad87a,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd1d69251f6e026b4cf79c0521e11f64b4609e4146b065cc5ee67d8dcccf748,PodSandboxId:63d16d4983e01940b1c9bf89a1c488f3c2f91108d6f2e60c03c12fd13bb4c25b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:172123957
6440596851,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rzt74,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d4ba3b29-c2ab-4ed4-894d-9fcca9d6eaca,},Annotations:map[string]string{io.kubernetes.container.hash: 42a4325,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:146717e5268df0db8d0ce177bc7074a55e0db1207cef215c28d8f43de6ae334c,PodSandboxId:a668b50ae04ad5c7a9958f97f583af7bc92134e6341a4dc4de1f27b2c5b082a3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721239530674726002,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5m4fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886d3903-d44e-489c-bf8d-be11494d150b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f57834,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfb23522a04a0b30f4683eb4e6f062603e4e822ef53c669efc17930b868dc18,PodSandboxId:3e48d5f320a76dde15ae3ab63d1aab2ff919abba7de3033c4aea635948167ada,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c88
72c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721239516829292897,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c997d-8a91-402e-aabd-c19ce8771f6e,},Annotations:map[string]string{io.kubernetes.container.hash: 84ed994d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26,PodSandboxId:f614224ac1b46f7af8481679f35918eeac2fb4ef89cbf76d9f8d1812de938c2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed
5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721239514989989573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpzc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ed1339-07ca-4d41-a32f-3a2b203555e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6569530c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3,PodSandboxI
d:aeff920decc5bc2cb937abb066b6256fcfb03b046111322cc884fc6c5a0a9fe1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721239512300888305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45g92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c805c-5dbe-4f01-8153-dcf0424c2edc,},Annotations:map[string]string{io.kubernetes.container.hash: 28b1b38d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5,PodSandboxId:3d2313a576fee8fb017003454d315bf
c1d51b4f459a62148be66f22872180bc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721239492232507012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afda81ae2740d330017a46f45930e6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92,PodSandboxId:921f3b320d6a0e8254997b2b1e50e6e332583a9cfe2570940
d4089f2113fd3aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721239492219429708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c2e7f10676afa5f4ef1ebec7d4216c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfb65b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68,PodSandboxId:25fc7e6805bbe84d9443801dda9edc9b3bf49d2ff0f49271e5249f0d61a57b87,Metadata:&ContainerMetadata{Nam
e:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721239492203326163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8e8a77892a04a0ceea7caff40574ef,},Annotations:map[string]string{io.kubernetes.container.hash: 427c8812,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1,PodSandboxId:f5dc0e184131d22f823a74eced70a8fe39b415b24913db129a8259c3d03e707a,Metadata:&ContainerMetadata{Name:kube-controller
-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721239492071345398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e877c929903859d77ada01f09fc28ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f0fbb68-8b9f-46f6-85d2-205f87eaec8b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.970050227Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bea62cae-3926-4a0e-ad4b-13aaaa4d3333 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.970369826Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bea62cae-3926-4a0e-ad4b-13aaaa4d3333 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.972611748Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d92afa7f-f337-4d43-8176-2f8723b283af name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.973972164Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721239996973943827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d92afa7f-f337-4d43-8176-2f8723b283af name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.974466160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8813c670-ea78-44ab-aac1-6014488c0fc4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.974526232Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8813c670-ea78-44ab-aac1-6014488c0fc4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.974877606Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2918bb8f28eb43f891719f323c69145102d464c1c37fdaf9a33bae22afe1d1d0,PodSandboxId:33eeb5aa7d898ca3506d24a719c6b5bf2dab23a16b578b1a86d1c77127e8995d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721239812819686025,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-6bfmd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b273295-d4f1-43aa-b0ef-d148763f6593,},Annotations:map[string]string{io.kubernetes.container.hash: cc4c9615,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d503fb774476ce51dec196a5540f7f1a895198a9458d0ac60141eb335ebfbf0,PodSandboxId:1582aebb9ca26d07e9d5bee806549d6b91f144053e0fdb99ac6b8cd49eea4c23,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721239671387457770,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6918b754-82dd-4b43-acdd-204f3a8419d3,},Annotations:map[string]string{io.kubernet
es.container.hash: fd6b8330,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c1926654c9521721b2311606c653c2e711eaaa8cf42a672c919ad0693abd00,PodSandboxId:f18424687dfa0862df3c461ff4981f78c54951632533e881bee7b0c54528f36c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721239640261678791,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-29grz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: b89e8f1b-24a4-46f3-b300-72f6c803f7d6,},Annotations:map[string]string{io.kubernetes.container.hash: 97be45f8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab3688fd15de45b760f574e9673fa61a7686ac369815e917070b3418d588be8,PodSandboxId:63ff63a9bff7c4100e37fbbba69011f462ee746505f9d36bd2c197cc815f02f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721239597302212555,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-7d9fn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f4619892-e5ff-45a2-b2d8-001fba539eb6,},Annotations:map[string]string{io.kubernetes.container.hash: cad87a,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd1d69251f6e026b4cf79c0521e11f64b4609e4146b065cc5ee67d8dcccf748,PodSandboxId:63d16d4983e01940b1c9bf89a1c488f3c2f91108d6f2e60c03c12fd13bb4c25b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:172123957
6440596851,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rzt74,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d4ba3b29-c2ab-4ed4-894d-9fcca9d6eaca,},Annotations:map[string]string{io.kubernetes.container.hash: 42a4325,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:146717e5268df0db8d0ce177bc7074a55e0db1207cef215c28d8f43de6ae334c,PodSandboxId:a668b50ae04ad5c7a9958f97f583af7bc92134e6341a4dc4de1f27b2c5b082a3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721239530674726002,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5m4fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886d3903-d44e-489c-bf8d-be11494d150b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f57834,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfb23522a04a0b30f4683eb4e6f062603e4e822ef53c669efc17930b868dc18,PodSandboxId:3e48d5f320a76dde15ae3ab63d1aab2ff919abba7de3033c4aea635948167ada,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c88
72c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721239516829292897,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c997d-8a91-402e-aabd-c19ce8771f6e,},Annotations:map[string]string{io.kubernetes.container.hash: 84ed994d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26,PodSandboxId:f614224ac1b46f7af8481679f35918eeac2fb4ef89cbf76d9f8d1812de938c2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed
5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721239514989989573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpzc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ed1339-07ca-4d41-a32f-3a2b203555e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6569530c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3,PodSandboxI
d:aeff920decc5bc2cb937abb066b6256fcfb03b046111322cc884fc6c5a0a9fe1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721239512300888305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45g92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c805c-5dbe-4f01-8153-dcf0424c2edc,},Annotations:map[string]string{io.kubernetes.container.hash: 28b1b38d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5,PodSandboxId:3d2313a576fee8fb017003454d315bf
c1d51b4f459a62148be66f22872180bc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721239492232507012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afda81ae2740d330017a46f45930e6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92,PodSandboxId:921f3b320d6a0e8254997b2b1e50e6e332583a9cfe2570940
d4089f2113fd3aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721239492219429708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c2e7f10676afa5f4ef1ebec7d4216c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfb65b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68,PodSandboxId:25fc7e6805bbe84d9443801dda9edc9b3bf49d2ff0f49271e5249f0d61a57b87,Metadata:&ContainerMetadata{Nam
e:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721239492203326163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8e8a77892a04a0ceea7caff40574ef,},Annotations:map[string]string{io.kubernetes.container.hash: 427c8812,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1,PodSandboxId:f5dc0e184131d22f823a74eced70a8fe39b415b24913db129a8259c3d03e707a,Metadata:&ContainerMetadata{Name:kube-controller
-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721239492071345398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e877c929903859d77ada01f09fc28ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8813c670-ea78-44ab-aac1-6014488c0fc4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.988352985Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=a124197a-11e8-41d6-a32e-9a09216bb627 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.988849083Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:33eeb5aa7d898ca3506d24a719c6b5bf2dab23a16b578b1a86d1c77127e8995d,Metadata:&PodSandboxMetadata{Name:hello-world-app-6778b5fc9f-6bfmd,Uid:9b273295-d4f1-43aa-b0ef-d148763f6593,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239809991959349,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-6bfmd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b273295-d4f1-43aa-b0ef-d148763f6593,pod-template-hash: 6778b5fc9f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:10:09.682567965Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1582aebb9ca26d07e9d5bee806549d6b91f144053e0fdb99ac6b8cd49eea4c23,Metadata:&PodSandboxMetadata{Name:nginx,Uid:6918b754-82dd-4b43-acdd-204f3a8419d3,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1721239658605596693,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6918b754-82dd-4b43-acdd-204f3a8419d3,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:07:38.287072080Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f18424687dfa0862df3c461ff4981f78c54951632533e881bee7b0c54528f36c,Metadata:&PodSandboxMetadata{Name:headlamp-7867546754-29grz,Uid:b89e8f1b-24a4-46f3-b300-72f6c803f7d6,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239634380239621,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7867546754-29grz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: b89e8f1b-24a4-46f3-b300-72f6c803f7d6,pod-template-hash: 7867546754,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
07-17T18:07:14.038854020Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:63ff63a9bff7c4100e37fbbba69011f462ee746505f9d36bd2c197cc815f02f7,Metadata:&PodSandboxMetadata{Name:gcp-auth-5db96cd9b4-7d9fn,Uid:f4619892-e5ff-45a2-b2d8-001fba539eb6,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239586837415170,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-7d9fn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f4619892-e5ff-45a2-b2d8-001fba539eb6,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 5db96cd9b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:05:22.619984001Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:63d16d4983e01940b1c9bf89a1c488f3c2f91108d6f2e60c03c12fd13bb4c25b,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-799879c74f-rzt74,Uid:d4ba3b29-c2ab-4ed4-894d-9fcca9d6eaca,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,Creat
edAt:1721239519173863580,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rzt74,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d4ba3b29-c2ab-4ed4-894d-9fcca9d6eaca,pod-template-hash: 799879c74f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:05:17.364020086Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a668b50ae04ad5c7a9958f97f583af7bc92134e6341a4dc4de1f27b2c5b082a3,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-5m4fv,Uid:886d3903-d44e-489c-bf8d-be11494d150b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239517277671953,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-5m4fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886d3903-d44e-489c-bf8d-be11494d150b,k8s-app: metr
ics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:05:16.935381396Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3e48d5f320a76dde15ae3ab63d1aab2ff919abba7de3033c4aea635948167ada,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:eb1c997d-8a91-402e-aabd-c19ce8771f6e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239515906346716,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c997d-8a91-402e-aabd-c19ce8771f6e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\
"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-17T18:05:15.562034539Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f614224ac1b46f7af8481679f35918eeac2fb4ef89cbf76d9f8d1812de938c2c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wpzc7,Uid:31ed1339-07ca-4d41-a32f-3a2b203555e1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239511586537703,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpzc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ed1339-07ca-4d41-a32f-3a2b20
3555e1,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:05:11.205032932Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aeff920decc5bc2cb937abb066b6256fcfb03b046111322cc884fc6c5a0a9fe1,Metadata:&PodSandboxMetadata{Name:kube-proxy-45g92,Uid:287c805c-5dbe-4f01-8153-dcf0424c2edc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239511564180928,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-45g92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c805c-5dbe-4f01-8153-dcf0424c2edc,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:05:10.909396775Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:921f3b320d6a0e8254997b2b1e50e6e332583a9cfe2570940d4089f2113fd3aa,Metadata:&PodSandboxMetadata{Name:etcd-addons-453453,Uid:02c2e
7f10676afa5f4ef1ebec7d4216c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239491918222610,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c2e7f10676afa5f4ef1ebec7d4216c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.136:2379,kubernetes.io/config.hash: 02c2e7f10676afa5f4ef1ebec7d4216c,kubernetes.io/config.seen: 2024-07-17T18:04:51.470029619Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3d2313a576fee8fb017003454d315bfc1d51b4f459a62148be66f22872180bc1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-453453,Uid:afda81ae2740d330017a46f45930e6fe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239491914725453,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-
453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afda81ae2740d330017a46f45930e6fe,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: afda81ae2740d330017a46f45930e6fe,kubernetes.io/config.seen: 2024-07-17T18:04:51.470028754Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f5dc0e184131d22f823a74eced70a8fe39b415b24913db129a8259c3d03e707a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-453453,Uid:2e877c929903859d77ada01f09fc28ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239491910581387,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e877c929903859d77ada01f09fc28ad,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2e877c929903859d77ada01f09fc28ad,kubernetes.io/config.seen: 2024-07-17T18:04:51.470027750Z,
kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:25fc7e6805bbe84d9443801dda9edc9b3bf49d2ff0f49271e5249f0d61a57b87,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-453453,Uid:6d8e8a77892a04a0ceea7caff40574ef,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721239491910032408,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8e8a77892a04a0ceea7caff40574ef,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.136:8443,kubernetes.io/config.hash: 6d8e8a77892a04a0ceea7caff40574ef,kubernetes.io/config.seen: 2024-07-17T18:04:51.470022829Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a124197a-11e8-41d6-a32e-9a09216bb627 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.989570267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99c681fd-a2f7-4f01-b0cf-77093d9c92e3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.989643101Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99c681fd-a2f7-4f01-b0cf-77093d9c92e3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:13:16 addons-453453 crio[679]: time="2024-07-17 18:13:16.990156919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2918bb8f28eb43f891719f323c69145102d464c1c37fdaf9a33bae22afe1d1d0,PodSandboxId:33eeb5aa7d898ca3506d24a719c6b5bf2dab23a16b578b1a86d1c77127e8995d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721239812819686025,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-6bfmd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b273295-d4f1-43aa-b0ef-d148763f6593,},Annotations:map[string]string{io.kubernetes.container.hash: cc4c9615,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d503fb774476ce51dec196a5540f7f1a895198a9458d0ac60141eb335ebfbf0,PodSandboxId:1582aebb9ca26d07e9d5bee806549d6b91f144053e0fdb99ac6b8cd49eea4c23,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721239671387457770,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6918b754-82dd-4b43-acdd-204f3a8419d3,},Annotations:map[string]string{io.kubernet
es.container.hash: fd6b8330,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c1926654c9521721b2311606c653c2e711eaaa8cf42a672c919ad0693abd00,PodSandboxId:f18424687dfa0862df3c461ff4981f78c54951632533e881bee7b0c54528f36c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721239640261678791,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-29grz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: b89e8f1b-24a4-46f3-b300-72f6c803f7d6,},Annotations:map[string]string{io.kubernetes.container.hash: 97be45f8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab3688fd15de45b760f574e9673fa61a7686ac369815e917070b3418d588be8,PodSandboxId:63ff63a9bff7c4100e37fbbba69011f462ee746505f9d36bd2c197cc815f02f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721239597302212555,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-7d9fn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f4619892-e5ff-45a2-b2d8-001fba539eb6,},Annotations:map[string]string{io.kubernetes.container.hash: cad87a,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd1d69251f6e026b4cf79c0521e11f64b4609e4146b065cc5ee67d8dcccf748,PodSandboxId:63d16d4983e01940b1c9bf89a1c488f3c2f91108d6f2e60c03c12fd13bb4c25b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:172123957
6440596851,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rzt74,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d4ba3b29-c2ab-4ed4-894d-9fcca9d6eaca,},Annotations:map[string]string{io.kubernetes.container.hash: 42a4325,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:146717e5268df0db8d0ce177bc7074a55e0db1207cef215c28d8f43de6ae334c,PodSandboxId:a668b50ae04ad5c7a9958f97f583af7bc92134e6341a4dc4de1f27b2c5b082a3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721239530674726002,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5m4fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886d3903-d44e-489c-bf8d-be11494d150b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f57834,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfb23522a04a0b30f4683eb4e6f062603e4e822ef53c669efc17930b868dc18,PodSandboxId:3e48d5f320a76dde15ae3ab63d1aab2ff919abba7de3033c4aea635948167ada,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c88
72c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721239516829292897,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c997d-8a91-402e-aabd-c19ce8771f6e,},Annotations:map[string]string{io.kubernetes.container.hash: 84ed994d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26,PodSandboxId:f614224ac1b46f7af8481679f35918eeac2fb4ef89cbf76d9f8d1812de938c2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed
5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721239514989989573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpzc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ed1339-07ca-4d41-a32f-3a2b203555e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6569530c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3,PodSandboxI
d:aeff920decc5bc2cb937abb066b6256fcfb03b046111322cc884fc6c5a0a9fe1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721239512300888305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45g92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c805c-5dbe-4f01-8153-dcf0424c2edc,},Annotations:map[string]string{io.kubernetes.container.hash: 28b1b38d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5,PodSandboxId:3d2313a576fee8fb017003454d315bf
c1d51b4f459a62148be66f22872180bc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721239492232507012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afda81ae2740d330017a46f45930e6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92,PodSandboxId:921f3b320d6a0e8254997b2b1e50e6e332583a9cfe2570940
d4089f2113fd3aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721239492219429708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c2e7f10676afa5f4ef1ebec7d4216c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfb65b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68,PodSandboxId:25fc7e6805bbe84d9443801dda9edc9b3bf49d2ff0f49271e5249f0d61a57b87,Metadata:&ContainerMetadata{Nam
e:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721239492203326163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8e8a77892a04a0ceea7caff40574ef,},Annotations:map[string]string{io.kubernetes.container.hash: 427c8812,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1,PodSandboxId:f5dc0e184131d22f823a74eced70a8fe39b415b24913db129a8259c3d03e707a,Metadata:&ContainerMetadata{Name:kube-controller
-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721239492071345398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e877c929903859d77ada01f09fc28ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99c681fd-a2f7-4f01-b0cf-77093d9c92e3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:13:17 addons-453453 crio[679]: time="2024-07-17 18:13:17.014278306Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0fecf16-6053-417a-af9e-b021f04b3965 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:13:17 addons-453453 crio[679]: time="2024-07-17 18:13:17.014369105Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0fecf16-6053-417a-af9e-b021f04b3965 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:13:17 addons-453453 crio[679]: time="2024-07-17 18:13:17.015907161Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e06c35b-a513-4360-b425-1e51231618c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:13:17 addons-453453 crio[679]: time="2024-07-17 18:13:17.017311357Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721239997017286485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e06c35b-a513-4360-b425-1e51231618c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:13:17 addons-453453 crio[679]: time="2024-07-17 18:13:17.018117887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2727ed03-9863-4130-83fb-9eb7ec25a3c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:13:17 addons-453453 crio[679]: time="2024-07-17 18:13:17.018183249Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2727ed03-9863-4130-83fb-9eb7ec25a3c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:13:17 addons-453453 crio[679]: time="2024-07-17 18:13:17.018582026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2918bb8f28eb43f891719f323c69145102d464c1c37fdaf9a33bae22afe1d1d0,PodSandboxId:33eeb5aa7d898ca3506d24a719c6b5bf2dab23a16b578b1a86d1c77127e8995d,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721239812819686025,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-6bfmd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9b273295-d4f1-43aa-b0ef-d148763f6593,},Annotations:map[string]string{io.kubernetes.container.hash: cc4c9615,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d503fb774476ce51dec196a5540f7f1a895198a9458d0ac60141eb335ebfbf0,PodSandboxId:1582aebb9ca26d07e9d5bee806549d6b91f144053e0fdb99ac6b8cd49eea4c23,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721239671387457770,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6918b754-82dd-4b43-acdd-204f3a8419d3,},Annotations:map[string]string{io.kubernet
es.container.hash: fd6b8330,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07c1926654c9521721b2311606c653c2e711eaaa8cf42a672c919ad0693abd00,PodSandboxId:f18424687dfa0862df3c461ff4981f78c54951632533e881bee7b0c54528f36c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721239640261678791,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-29grz,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: b89e8f1b-24a4-46f3-b300-72f6c803f7d6,},Annotations:map[string]string{io.kubernetes.container.hash: 97be45f8,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ab3688fd15de45b760f574e9673fa61a7686ac369815e917070b3418d588be8,PodSandboxId:63ff63a9bff7c4100e37fbbba69011f462ee746505f9d36bd2c197cc815f02f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721239597302212555,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-7d9fn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f4619892-e5ff-45a2-b2d8-001fba539eb6,},Annotations:map[string]string{io.kubernetes.container.hash: cad87a,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd1d69251f6e026b4cf79c0521e11f64b4609e4146b065cc5ee67d8dcccf748,PodSandboxId:63d16d4983e01940b1c9bf89a1c488f3c2f91108d6f2e60c03c12fd13bb4c25b,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:172123957
6440596851,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-rzt74,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d4ba3b29-c2ab-4ed4-894d-9fcca9d6eaca,},Annotations:map[string]string{io.kubernetes.container.hash: 42a4325,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:146717e5268df0db8d0ce177bc7074a55e0db1207cef215c28d8f43de6ae334c,PodSandboxId:a668b50ae04ad5c7a9958f97f583af7bc92134e6341a4dc4de1f27b2c5b082a3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721239530674726002,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5m4fv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 886d3903-d44e-489c-bf8d-be11494d150b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f57834,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bfb23522a04a0b30f4683eb4e6f062603e4e822ef53c669efc17930b868dc18,PodSandboxId:3e48d5f320a76dde15ae3ab63d1aab2ff919abba7de3033c4aea635948167ada,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c88
72c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721239516829292897,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c997d-8a91-402e-aabd-c19ce8771f6e,},Annotations:map[string]string{io.kubernetes.container.hash: 84ed994d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26,PodSandboxId:f614224ac1b46f7af8481679f35918eeac2fb4ef89cbf76d9f8d1812de938c2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed
5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721239514989989573,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wpzc7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31ed1339-07ca-4d41-a32f-3a2b203555e1,},Annotations:map[string]string{io.kubernetes.container.hash: 6569530c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3,PodSandboxI
d:aeff920decc5bc2cb937abb066b6256fcfb03b046111322cc884fc6c5a0a9fe1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721239512300888305,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45g92,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c805c-5dbe-4f01-8153-dcf0424c2edc,},Annotations:map[string]string{io.kubernetes.container.hash: 28b1b38d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5,PodSandboxId:3d2313a576fee8fb017003454d315bf
c1d51b4f459a62148be66f22872180bc1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721239492232507012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afda81ae2740d330017a46f45930e6fe,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92,PodSandboxId:921f3b320d6a0e8254997b2b1e50e6e332583a9cfe2570940
d4089f2113fd3aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721239492219429708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02c2e7f10676afa5f4ef1ebec7d4216c,},Annotations:map[string]string{io.kubernetes.container.hash: cdfb65b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68,PodSandboxId:25fc7e6805bbe84d9443801dda9edc9b3bf49d2ff0f49271e5249f0d61a57b87,Metadata:&ContainerMetadata{Nam
e:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721239492203326163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d8e8a77892a04a0ceea7caff40574ef,},Annotations:map[string]string{io.kubernetes.container.hash: 427c8812,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1,PodSandboxId:f5dc0e184131d22f823a74eced70a8fe39b415b24913db129a8259c3d03e707a,Metadata:&ContainerMetadata{Name:kube-controller
-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721239492071345398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-453453,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e877c929903859d77ada01f09fc28ad,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2727ed03-9863-4130-83fb-9eb7ec25a3c3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2918bb8f28eb4       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   33eeb5aa7d898       hello-world-app-6778b5fc9f-6bfmd
	3d503fb774476       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         5 minutes ago       Running             nginx                     0                   1582aebb9ca26       nginx
	07c1926654c95       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   5 minutes ago       Running             headlamp                  0                   f18424687dfa0       headlamp-7867546754-29grz
	5ab3688fd15de       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            6 minutes ago       Running             gcp-auth                  0                   63ff63a9bff7c       gcp-auth-5db96cd9b4-7d9fn
	1dd1d69251f6e       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                         7 minutes ago       Running             yakd                      0                   63d16d4983e01       yakd-dashboard-799879c74f-rzt74
	146717e5268df       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Exited              metrics-server            0                   a668b50ae04ad       metrics-server-c59844bb4-5m4fv
	3bfb23522a04a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   3e48d5f320a76       storage-provisioner
	d45bcf1eb6bad       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   f614224ac1b46       coredns-7db6d8ff4d-wpzc7
	2fb69b3eff0c8       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                        8 minutes ago       Running             kube-proxy                0                   aeff920decc5b       kube-proxy-45g92
	259069889e9e8       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                        8 minutes ago       Running             kube-scheduler            0                   3d2313a576fee       kube-scheduler-addons-453453
	b698fb331680e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   921f3b320d6a0       etcd-addons-453453
	b0a42f1bfe6fa       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                        8 minutes ago       Running             kube-apiserver            0                   25fc7e6805bbe       kube-apiserver-addons-453453
	35a820bcebd02       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                        8 minutes ago       Running             kube-controller-manager   0                   f5dc0e184131d       kube-controller-manager-addons-453453
	
	
	==> coredns [d45bcf1eb6bad9c02f64a63784eda04d6192a3027a20431609562a6c2eefad26] <==
	[INFO] 10.244.0.7:40475 - 33247 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000125668s
	[INFO] 10.244.0.7:52645 - 5955 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000080304s
	[INFO] 10.244.0.7:52645 - 16449 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000160659s
	[INFO] 10.244.0.7:50139 - 52436 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067486s
	[INFO] 10.244.0.7:50139 - 5077 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050159s
	[INFO] 10.244.0.7:55040 - 80 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00007855s
	[INFO] 10.244.0.7:55040 - 59734 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000048133s
	[INFO] 10.244.0.7:39990 - 42144 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000184508s
	[INFO] 10.244.0.7:39990 - 24482 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000097731s
	[INFO] 10.244.0.7:45949 - 7974 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00011024s
	[INFO] 10.244.0.7:45949 - 44068 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.001625718s
	[INFO] 10.244.0.7:34057 - 24261 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000077898s
	[INFO] 10.244.0.7:34057 - 55482 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000031736s
	[INFO] 10.244.0.7:46820 - 64246 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000066173s
	[INFO] 10.244.0.7:46820 - 53744 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00009971s
	[INFO] 10.244.0.22:60535 - 52797 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000347098s
	[INFO] 10.244.0.22:55677 - 45067 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000090447s
	[INFO] 10.244.0.22:49722 - 20404 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000082103s
	[INFO] 10.244.0.22:34998 - 11278 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000056523s
	[INFO] 10.244.0.22:59034 - 50331 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000061046s
	[INFO] 10.244.0.22:58850 - 18232 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000058025s
	[INFO] 10.244.0.22:50257 - 13348 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000437738s
	[INFO] 10.244.0.22:36407 - 56893 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000805971s
	[INFO] 10.244.0.25:51907 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000531651s
	[INFO] 10.244.0.25:56952 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000146342s
	
	
	==> describe nodes <==
	Name:               addons-453453
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-453453
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=addons-453453
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T18_04_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-453453
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:04:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-453453
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:13:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:10:35 +0000   Wed, 17 Jul 2024 18:04:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:10:35 +0000   Wed, 17 Jul 2024 18:04:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:10:35 +0000   Wed, 17 Jul 2024 18:04:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:10:35 +0000   Wed, 17 Jul 2024 18:04:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.136
	  Hostname:    addons-453453
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb6dc61fd889454e95ace36bd2204ff5
	  System UUID:                eb6dc61f-d889-454e-95ac-e36bd2204ff5
	  Boot ID:                    e850f4c2-d1e4-4c24-8b9f-0a02de591062
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-6bfmd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  gcp-auth                    gcp-auth-5db96cd9b4-7d9fn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  headlamp                    headlamp-7867546754-29grz                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 coredns-7db6d8ff4d-wpzc7                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m6s
	  kube-system                 etcd-addons-453453                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m20s
	  kube-system                 kube-apiserver-addons-453453             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-controller-manager-addons-453453    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-proxy-45g92                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-scheduler-addons-453453             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  yakd-dashboard              yakd-dashboard-799879c74f-rzt74          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     8m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m3s                   kube-proxy       
	  Normal  Starting                 8m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m26s (x8 over 8m26s)  kubelet          Node addons-453453 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m26s (x8 over 8m26s)  kubelet          Node addons-453453 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m26s (x7 over 8m26s)  kubelet          Node addons-453453 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m20s                  kubelet          Node addons-453453 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m20s                  kubelet          Node addons-453453 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m20s                  kubelet          Node addons-453453 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m19s                  kubelet          Node addons-453453 status is now: NodeReady
	  Normal  RegisteredNode           8m8s                   node-controller  Node addons-453453 event: Registered Node addons-453453 in Controller
	
	
	==> dmesg <==
	[Jul17 18:05] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.862494] systemd-fstab-generator[1468]: Ignoring "noauto" option for root device
	[  +5.158968] kauditd_printk_skb: 111 callbacks suppressed
	[  +5.072495] kauditd_printk_skb: 136 callbacks suppressed
	[  +6.950898] kauditd_printk_skb: 72 callbacks suppressed
	[  +5.981210] kauditd_printk_skb: 6 callbacks suppressed
	[ +25.113127] kauditd_printk_skb: 23 callbacks suppressed
	[Jul17 18:06] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.264345] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.107585] kauditd_printk_skb: 86 callbacks suppressed
	[  +5.795056] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.028091] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.048063] kauditd_printk_skb: 7 callbacks suppressed
	[Jul17 18:07] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.244030] kauditd_printk_skb: 40 callbacks suppressed
	[  +7.731581] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.022055] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.460118] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.331484] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.023422] kauditd_printk_skb: 8 callbacks suppressed
	[Jul17 18:08] kauditd_printk_skb: 30 callbacks suppressed
	[  +6.444456] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.890358] kauditd_printk_skb: 41 callbacks suppressed
	[Jul17 18:10] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.092949] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [b698fb331680ea3e2eb6b72768d701f550390acb4310ed9ebafb2c065ad3fa92] <==
	{"level":"warn","ts":"2024-07-17T18:06:27.110882Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"350.889447ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-csc46\" ","response":"range_response_count:1 size:3634"}
	{"level":"info","ts":"2024-07-17T18:06:27.110931Z","caller":"traceutil/trace.go:171","msg":"trace[160697363] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-patch-csc46; range_end:; response_count:1; response_revision:1125; }","duration":"350.973692ms","start":"2024-07-17T18:06:26.759944Z","end":"2024-07-17T18:06:27.110918Z","steps":["trace[160697363] 'agreement among raft nodes before linearized reading'  (duration: 350.785965ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:06:27.110971Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T18:06:26.759932Z","time spent":"351.030953ms","remote":"127.0.0.1:56800","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":3656,"request content":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-csc46\" "}
	{"level":"warn","ts":"2024-07-17T18:06:27.111104Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"285.287972ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-5m4fv\" ","response":"range_response_count:1 size:4461"}
	{"level":"info","ts":"2024-07-17T18:06:27.11114Z","caller":"traceutil/trace.go:171","msg":"trace[793819518] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-5m4fv; range_end:; response_count:1; response_revision:1125; }","duration":"285.401931ms","start":"2024-07-17T18:06:26.825732Z","end":"2024-07-17T18:06:27.111134Z","steps":["trace[793819518] 'agreement among raft nodes before linearized reading'  (duration: 285.332056ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:06:27.111862Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.681921ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85652"}
	{"level":"info","ts":"2024-07-17T18:06:27.111907Z","caller":"traceutil/trace.go:171","msg":"trace[189773330] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1125; }","duration":"260.749293ms","start":"2024-07-17T18:06:26.851149Z","end":"2024-07-17T18:06:27.111899Z","steps":["trace[189773330] 'agreement among raft nodes before linearized reading'  (duration: 260.526867ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:06:27.1152Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.801845ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-17T18:06:27.115242Z","caller":"traceutil/trace.go:171","msg":"trace[342097142] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1125; }","duration":"202.847967ms","start":"2024-07-17T18:06:26.912387Z","end":"2024-07-17T18:06:27.115235Z","steps":["trace[342097142] 'agreement among raft nodes before linearized reading'  (duration: 198.860277ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:06:29.656076Z","caller":"traceutil/trace.go:171","msg":"trace[1958085727] transaction","detail":"{read_only:false; response_revision:1145; number_of_response:1; }","duration":"199.158334ms","start":"2024-07-17T18:06:29.456903Z","end":"2024-07-17T18:06:29.656061Z","steps":["trace[1958085727] 'process raft request'  (duration: 198.750552ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:06:41.972723Z","caller":"traceutil/trace.go:171","msg":"trace[1698292236] linearizableReadLoop","detail":"{readStateIndex:1249; appliedIndex:1248; }","duration":"237.331853ms","start":"2024-07-17T18:06:41.735364Z","end":"2024-07-17T18:06:41.972696Z","steps":["trace[1698292236] 'read index received'  (duration: 237.171461ms)","trace[1698292236] 'applied index is now lower than readState.Index'  (duration: 159.727µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T18:06:41.972862Z","caller":"traceutil/trace.go:171","msg":"trace[13708761] transaction","detail":"{read_only:false; response_revision:1213; number_of_response:1; }","duration":"423.839222ms","start":"2024-07-17T18:06:41.549017Z","end":"2024-07-17T18:06:41.972856Z","steps":["trace[13708761] 'process raft request'  (duration: 423.534734ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:06:41.973061Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T18:06:41.549001Z","time spent":"423.944296ms","remote":"127.0.0.1:56896","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":485,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1167 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:426 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"warn","ts":"2024-07-17T18:06:41.973207Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.859096ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-17T18:06:41.973251Z","caller":"traceutil/trace.go:171","msg":"trace[471786989] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1213; }","duration":"237.919666ms","start":"2024-07-17T18:06:41.735323Z","end":"2024-07-17T18:06:41.973243Z","steps":["trace[471786989] 'agreement among raft nodes before linearized reading'  (duration: 237.826974ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:06:41.973416Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.51641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gadget/gadget-dd2tz.17e3123937aca04e\" ","response":"range_response_count:1 size:808"}
	{"level":"info","ts":"2024-07-17T18:06:41.973471Z","caller":"traceutil/trace.go:171","msg":"trace[1903211114] range","detail":"{range_begin:/registry/events/gadget/gadget-dd2tz.17e3123937aca04e; range_end:; response_count:1; response_revision:1213; }","duration":"172.595156ms","start":"2024-07-17T18:06:41.800867Z","end":"2024-07-17T18:06:41.973462Z","steps":["trace[1903211114] 'agreement among raft nodes before linearized reading'  (duration: 172.476341ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:06:41.973617Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.772158ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85460"}
	{"level":"info","ts":"2024-07-17T18:06:41.973666Z","caller":"traceutil/trace.go:171","msg":"trace[1968914310] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1213; }","duration":"125.828626ms","start":"2024-07-17T18:06:41.847821Z","end":"2024-07-17T18:06:41.973649Z","steps":["trace[1968914310] 'agreement among raft nodes before linearized reading'  (duration: 125.628559ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:07:39.372456Z","caller":"traceutil/trace.go:171","msg":"trace[1997254569] linearizableReadLoop","detail":"{readStateIndex:1591; appliedIndex:1590; }","duration":"270.673324ms","start":"2024-07-17T18:07:39.101743Z","end":"2024-07-17T18:07:39.372416Z","steps":["trace[1997254569] 'read index received'  (duration: 270.355179ms)","trace[1997254569] 'applied index is now lower than readState.Index'  (duration: 317.687µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T18:07:39.37286Z","caller":"traceutil/trace.go:171","msg":"trace[687251374] transaction","detail":"{read_only:false; response_revision:1536; number_of_response:1; }","duration":"326.275507ms","start":"2024-07-17T18:07:39.046569Z","end":"2024-07-17T18:07:39.372844Z","steps":["trace[687251374] 'process raft request'  (duration: 325.56784ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:07:39.373029Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T18:07:39.046554Z","time spent":"326.36825ms","remote":"127.0.0.1:56800","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3419,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/default/cloud-spanner-emulator-6fcd4f6f98-jtcdk\" mod_revision:1535 > success:<request_put:<key:\"/registry/pods/default/cloud-spanner-emulator-6fcd4f6f98-jtcdk\" value_size:3349 >> failure:<request_range:<key:\"/registry/pods/default/cloud-spanner-emulator-6fcd4f6f98-jtcdk\" > >"}
	{"level":"warn","ts":"2024-07-17T18:07:39.374016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.268417ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:12969"}
	{"level":"info","ts":"2024-07-17T18:07:39.374813Z","caller":"traceutil/trace.go:171","msg":"trace[731522879] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1536; }","duration":"273.022005ms","start":"2024-07-17T18:07:39.101713Z","end":"2024-07-17T18:07:39.374735Z","steps":["trace[731522879] 'agreement among raft nodes before linearized reading'  (duration: 271.381086ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:07:49.360129Z","caller":"traceutil/trace.go:171","msg":"trace[1608737015] transaction","detail":"{read_only:false; response_revision:1562; number_of_response:1; }","duration":"118.640204ms","start":"2024-07-17T18:07:49.241468Z","end":"2024-07-17T18:07:49.360108Z","steps":["trace[1608737015] 'process raft request'  (duration: 118.422953ms)"],"step_count":1}
	
	
	==> gcp-auth [5ab3688fd15de45b760f574e9673fa61a7686ac369815e917070b3418d588be8] <==
	2024/07/17 18:06:37 GCP Auth Webhook started!
	2024/07/17 18:07:13 Ready to marshal response ...
	2024/07/17 18:07:13 Ready to write response ...
	2024/07/17 18:07:13 Ready to marshal response ...
	2024/07/17 18:07:13 Ready to write response ...
	2024/07/17 18:07:14 Ready to marshal response ...
	2024/07/17 18:07:14 Ready to write response ...
	2024/07/17 18:07:24 Ready to marshal response ...
	2024/07/17 18:07:24 Ready to write response ...
	2024/07/17 18:07:24 Ready to marshal response ...
	2024/07/17 18:07:24 Ready to write response ...
	2024/07/17 18:07:30 Ready to marshal response ...
	2024/07/17 18:07:30 Ready to write response ...
	2024/07/17 18:07:30 Ready to marshal response ...
	2024/07/17 18:07:30 Ready to write response ...
	2024/07/17 18:07:36 Ready to marshal response ...
	2024/07/17 18:07:36 Ready to write response ...
	2024/07/17 18:07:38 Ready to marshal response ...
	2024/07/17 18:07:38 Ready to write response ...
	2024/07/17 18:07:49 Ready to marshal response ...
	2024/07/17 18:07:49 Ready to write response ...
	2024/07/17 18:08:13 Ready to marshal response ...
	2024/07/17 18:08:13 Ready to write response ...
	2024/07/17 18:10:09 Ready to marshal response ...
	2024/07/17 18:10:09 Ready to write response ...
	
	
	==> kernel <==
	 18:13:17 up 8 min,  0 users,  load average: 0.35, 0.67, 0.49
	Linux addons-453453 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b0a42f1bfe6faf3816fec26703b75c51c275cf53e41cb0b14e55e19a59b56d68] <==
	E0717 18:06:37.311323       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.45.40:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.45.40:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.45.40:443: connect: connection refused
	E0717 18:06:37.312034       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.45.40:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.45.40:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.45.40:443: connect: connection refused
	E0717 18:06:37.321902       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.45.40:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.45.40:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.45.40:443: connect: connection refused
	I0717 18:06:37.425635       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 18:07:13.910086       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.77.54"}
	I0717 18:07:32.594045       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0717 18:07:33.638580       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0717 18:07:38.096693       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0717 18:07:38.345252       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.118.236"}
	I0717 18:07:52.163105       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0717 18:08:05.607490       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0717 18:08:29.172443       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:08:29.172502       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:08:29.206878       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:08:29.206948       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:08:29.228930       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:08:29.228979       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:08:29.229719       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:08:29.229831       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:08:29.255268       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:08:29.255318       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 18:08:30.230189       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 18:08:30.256288       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 18:08:30.270879       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0717 18:10:09.835299       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.41.106"}
	
	
	==> kube-controller-manager [35a820bcebd023aa8b7ba05d9ccdf94c1b8ffdd13150bf47b2237c012310bfe1] <==
	W0717 18:11:07.964908       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:11:07.964998       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:11:17.694417       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:11:17.694605       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:11:41.503267       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:11:41.503339       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:11:52.035844       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:11:52.035900       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:12:01.566207       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:12:01.566328       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:12:02.929475       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:12:02.929600       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:12:19.889985       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:12:19.890103       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:12:22.792996       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:12:22.793030       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:12:33.001822       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:12:33.002010       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:12:59.146012       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:12:59.146089       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:13:02.072130       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:13:02.072323       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:13:08.900815       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:13:08.900981       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 18:13:15.917296       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="8.402µs"
	
	
	==> kube-proxy [2fb69b3eff0c898fb2eabde3e7ad2a124e3b4d429acd10e29ccdd313d00942f3] <==
	I0717 18:05:13.926347       1 server_linux.go:69] "Using iptables proxy"
	I0717 18:05:14.044500       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.136"]
	I0717 18:05:14.224118       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:05:14.224155       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:05:14.224174       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:05:14.230933       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:05:14.231144       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:05:14.231156       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:05:14.243337       1 config.go:192] "Starting service config controller"
	I0717 18:05:14.243354       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:05:14.243378       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:05:14.243381       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:05:14.248450       1 config.go:319] "Starting node config controller"
	I0717 18:05:14.248462       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:05:14.350824       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 18:05:14.350868       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:05:14.351083       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [259069889e9e8ca2eebaa5ec6047c30c6e33f0ce7f24861acdc9b3a5c7a59ca5] <==
	W0717 18:04:55.818147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:04:55.818190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 18:04:55.985082       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 18:04:55.985173       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 18:04:55.995880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:04:55.996586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:04:56.052927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:04:56.053008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:04:56.059173       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:04:56.059264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 18:04:56.081055       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:04:56.081099       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:04:56.141145       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:04:56.141277       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:04:56.184643       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:04:56.184736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 18:04:56.202568       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:04:56.202652       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 18:04:56.219280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:04:56.219369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 18:04:56.248335       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:04:56.248499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 18:04:56.270013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:04:56.270104       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0717 18:04:58.651442       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 18:10:15 addons-453453 kubelet[1277]: I0717 18:10:15.266268    1277 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a322614a-b8dc-4486-8666-27a4d1165a14-webhook-cert\") on node \"addons-453453\" DevicePath \"\""
	Jul 17 18:10:15 addons-453453 kubelet[1277]: I0717 18:10:15.800391    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a322614a-b8dc-4486-8666-27a4d1165a14" path="/var/lib/kubelet/pods/a322614a-b8dc-4486-8666-27a4d1165a14/volumes"
	Jul 17 18:10:57 addons-453453 kubelet[1277]: E0717 18:10:57.817166    1277 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:10:57 addons-453453 kubelet[1277]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:10:57 addons-453453 kubelet[1277]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:10:57 addons-453453 kubelet[1277]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:10:57 addons-453453 kubelet[1277]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:10:59 addons-453453 kubelet[1277]: I0717 18:10:59.210867    1277 scope.go:117] "RemoveContainer" containerID="afe10994e3ef10113722afb027166ae7c7fd120e44220f3baf1465d3ad46cfa7"
	Jul 17 18:10:59 addons-453453 kubelet[1277]: I0717 18:10:59.225476    1277 scope.go:117] "RemoveContainer" containerID="6f3d361df604d29a762cdbf9eaddd32323ae5e12b4251aec829f29894647d049"
	Jul 17 18:11:57 addons-453453 kubelet[1277]: E0717 18:11:57.817983    1277 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:11:57 addons-453453 kubelet[1277]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:11:57 addons-453453 kubelet[1277]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:11:57 addons-453453 kubelet[1277]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:11:57 addons-453453 kubelet[1277]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:12:57 addons-453453 kubelet[1277]: E0717 18:12:57.817238    1277 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:12:57 addons-453453 kubelet[1277]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:12:57 addons-453453 kubelet[1277]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:12:57 addons-453453 kubelet[1277]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:12:57 addons-453453 kubelet[1277]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:13:17 addons-453453 kubelet[1277]: I0717 18:13:17.343112    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/886d3903-d44e-489c-bf8d-be11494d150b-tmp-dir\") pod \"886d3903-d44e-489c-bf8d-be11494d150b\" (UID: \"886d3903-d44e-489c-bf8d-be11494d150b\") "
	Jul 17 18:13:17 addons-453453 kubelet[1277]: I0717 18:13:17.343165    1277 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlhw9\" (UniqueName: \"kubernetes.io/projected/886d3903-d44e-489c-bf8d-be11494d150b-kube-api-access-vlhw9\") pod \"886d3903-d44e-489c-bf8d-be11494d150b\" (UID: \"886d3903-d44e-489c-bf8d-be11494d150b\") "
	Jul 17 18:13:17 addons-453453 kubelet[1277]: I0717 18:13:17.343885    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886d3903-d44e-489c-bf8d-be11494d150b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "886d3903-d44e-489c-bf8d-be11494d150b" (UID: "886d3903-d44e-489c-bf8d-be11494d150b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 17 18:13:17 addons-453453 kubelet[1277]: I0717 18:13:17.354293    1277 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886d3903-d44e-489c-bf8d-be11494d150b-kube-api-access-vlhw9" (OuterVolumeSpecName: "kube-api-access-vlhw9") pod "886d3903-d44e-489c-bf8d-be11494d150b" (UID: "886d3903-d44e-489c-bf8d-be11494d150b"). InnerVolumeSpecName "kube-api-access-vlhw9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 18:13:17 addons-453453 kubelet[1277]: I0717 18:13:17.443486    1277 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/886d3903-d44e-489c-bf8d-be11494d150b-tmp-dir\") on node \"addons-453453\" DevicePath \"\""
	Jul 17 18:13:17 addons-453453 kubelet[1277]: I0717 18:13:17.443590    1277 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vlhw9\" (UniqueName: \"kubernetes.io/projected/886d3903-d44e-489c-bf8d-be11494d150b-kube-api-access-vlhw9\") on node \"addons-453453\" DevicePath \"\""
	
	
	==> storage-provisioner [3bfb23522a04a0b30f4683eb4e6f062603e4e822ef53c669efc17930b868dc18] <==
	I0717 18:05:18.735487       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 18:05:18.829905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 18:05:18.829965       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 18:05:18.863117       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 18:05:18.864577       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"445c8ab1-cca9-4774-8f14-886e434338d5", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-453453_6ef61689-9b7b-4351-84e3-8a1a74c71fe0 became leader
	I0717 18:05:18.893051       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-453453_6ef61689-9b7b-4351-84e3-8a1a74c71fe0!
	I0717 18:05:18.997440       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-453453_6ef61689-9b7b-4351-84e3-8a1a74c71fe0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-453453 -n addons-453453
helpers_test.go:261: (dbg) Run:  kubectl --context addons-453453 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (339.64s)
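The kube-apiserver log above shows the v1beta1.metrics.k8s.io APIService repeatedly failing with "connection refused" against 10.104.45.40:443, so the metrics-server backing service was never reachable before the test gave up. A minimal, illustrative sketch for checking this by hand against the same context (the k8s-app=metrics-server label is assumed from the stock addon manifest, not taken from this run):

    kubectl --context addons-453453 get apiservice v1beta1.metrics.k8s.io -o wide
    kubectl --context addons-453453 -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context addons-453453 -n kube-system logs -l k8s-app=metrics-server --tail=50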

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.31s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-453453
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-453453: exit status 82 (2m0.469630274s)

                                                
                                                
-- stdout --
	* Stopping node "addons-453453"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-453453" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-453453
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-453453: exit status 11 (21.552908061s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.136:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-453453" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-453453
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-453453: exit status 11 (6.143963689s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.136:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-453453" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-453453
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-453453: exit status 11 (6.144363516s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.136:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-453453" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.31s)
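The stop failed with GUEST_STOP_TIMEOUT (exit status 82), and every follow-up addons command then failed with "no route to host" on 192.168.39.136:22, i.e. the guest never finished shutting down yet also stopped answering SSH. A hedged sketch for inspecting the VM directly on the test host, assuming the kvm2 driver named the libvirt domain after the profile (these commands were not part of the test run):

    virsh list --all                     # is the addons-453453 domain still reported as running?
    virsh dominfo addons-453453          # state as seen by libvirt
    out/minikube-linux-amd64 stop -p addons-453453 --alsologtostderr   # retry the stop with verbose driver logs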

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (142.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 node stop m02 -v=7 --alsologtostderr
E0717 18:25:46.914924  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:26:27.875385  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:27:13.091322  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-445282 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.462417491s)

                                                
                                                
-- stdout --
	* Stopping node "ha-445282-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:25:46.375387  415699 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:25:46.375506  415699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:25:46.375514  415699 out.go:304] Setting ErrFile to fd 2...
	I0717 18:25:46.375518  415699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:25:46.375727  415699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:25:46.375962  415699 mustload.go:65] Loading cluster: ha-445282
	I0717 18:25:46.376291  415699 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:25:46.376307  415699 stop.go:39] StopHost: ha-445282-m02
	I0717 18:25:46.376733  415699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:25:46.376777  415699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:25:46.393344  415699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46515
	I0717 18:25:46.393817  415699 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:25:46.394330  415699 main.go:141] libmachine: Using API Version  1
	I0717 18:25:46.394368  415699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:25:46.394773  415699 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:25:46.396783  415699 out.go:177] * Stopping node "ha-445282-m02"  ...
	I0717 18:25:46.398086  415699 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 18:25:46.398125  415699 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:25:46.398369  415699 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 18:25:46.398405  415699 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:25:46.401155  415699 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:25:46.401525  415699 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:25:46.401566  415699 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:25:46.401668  415699 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:25:46.401837  415699 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:25:46.401991  415699 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:25:46.402127  415699 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	I0717 18:25:46.488706  415699 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 18:25:46.543638  415699 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 18:25:46.597674  415699 main.go:141] libmachine: Stopping "ha-445282-m02"...
	I0717 18:25:46.597728  415699 main.go:141] libmachine: (ha-445282-m02) Calling .GetState
	I0717 18:25:46.599300  415699 main.go:141] libmachine: (ha-445282-m02) Calling .Stop
	I0717 18:25:46.603317  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 0/120
	I0717 18:25:47.605634  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 1/120
	I0717 18:25:48.606776  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 2/120
	I0717 18:25:49.607916  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 3/120
	I0717 18:25:50.609859  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 4/120
	I0717 18:25:51.612016  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 5/120
	I0717 18:25:52.613356  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 6/120
	I0717 18:25:53.615044  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 7/120
	I0717 18:25:54.616495  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 8/120
	I0717 18:25:55.617835  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 9/120
	I0717 18:25:56.620046  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 10/120
	I0717 18:25:57.621333  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 11/120
	I0717 18:25:58.623107  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 12/120
	I0717 18:25:59.625129  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 13/120
	I0717 18:26:00.626472  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 14/120
	I0717 18:26:01.628370  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 15/120
	I0717 18:26:02.629629  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 16/120
	I0717 18:26:03.631086  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 17/120
	I0717 18:26:04.632585  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 18/120
	I0717 18:26:05.634011  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 19/120
	I0717 18:26:06.636076  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 20/120
	I0717 18:26:07.637660  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 21/120
	I0717 18:26:08.639442  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 22/120
	I0717 18:26:09.640784  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 23/120
	I0717 18:26:10.641999  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 24/120
	I0717 18:26:11.643817  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 25/120
	I0717 18:26:12.645146  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 26/120
	I0717 18:26:13.646869  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 27/120
	I0717 18:26:14.648359  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 28/120
	I0717 18:26:15.649581  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 29/120
	I0717 18:26:16.651649  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 30/120
	I0717 18:26:17.652937  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 31/120
	I0717 18:26:18.654980  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 32/120
	I0717 18:26:19.656457  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 33/120
	I0717 18:26:20.657803  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 34/120
	I0717 18:26:21.659595  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 35/120
	I0717 18:26:22.661077  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 36/120
	I0717 18:26:23.662375  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 37/120
	I0717 18:26:24.663691  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 38/120
	I0717 18:26:25.665415  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 39/120
	I0717 18:26:26.666703  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 40/120
	I0717 18:26:27.668092  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 41/120
	I0717 18:26:28.669394  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 42/120
	I0717 18:26:29.671312  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 43/120
	I0717 18:26:30.672675  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 44/120
	I0717 18:26:31.674747  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 45/120
	I0717 18:26:32.676043  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 46/120
	I0717 18:26:33.678045  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 47/120
	I0717 18:26:34.679537  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 48/120
	I0717 18:26:35.681178  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 49/120
	I0717 18:26:36.683023  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 50/120
	I0717 18:26:37.684420  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 51/120
	I0717 18:26:38.685754  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 52/120
	I0717 18:26:39.687225  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 53/120
	I0717 18:26:40.688516  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 54/120
	I0717 18:26:41.689830  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 55/120
	I0717 18:26:42.691354  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 56/120
	I0717 18:26:43.693102  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 57/120
	I0717 18:26:44.694759  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 58/120
	I0717 18:26:45.696581  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 59/120
	I0717 18:26:46.698761  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 60/120
	I0717 18:26:47.700346  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 61/120
	I0717 18:26:48.701916  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 62/120
	I0717 18:26:49.703140  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 63/120
	I0717 18:26:50.705575  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 64/120
	I0717 18:26:51.707131  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 65/120
	I0717 18:26:52.708626  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 66/120
	I0717 18:26:53.710029  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 67/120
	I0717 18:26:54.711347  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 68/120
	I0717 18:26:55.713131  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 69/120
	I0717 18:26:56.715570  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 70/120
	I0717 18:26:57.716853  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 71/120
	I0717 18:26:58.718793  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 72/120
	I0717 18:26:59.720295  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 73/120
	I0717 18:27:00.721656  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 74/120
	I0717 18:27:01.723023  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 75/120
	I0717 18:27:02.724273  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 76/120
	I0717 18:27:03.725578  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 77/120
	I0717 18:27:04.727430  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 78/120
	I0717 18:27:05.728759  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 79/120
	I0717 18:27:06.730894  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 80/120
	I0717 18:27:07.732570  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 81/120
	I0717 18:27:08.733932  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 82/120
	I0717 18:27:09.735572  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 83/120
	I0717 18:27:10.737135  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 84/120
	I0717 18:27:11.738560  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 85/120
	I0717 18:27:12.739823  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 86/120
	I0717 18:27:13.740995  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 87/120
	I0717 18:27:14.742832  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 88/120
	I0717 18:27:15.744316  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 89/120
	I0717 18:27:16.746272  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 90/120
	I0717 18:27:17.748276  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 91/120
	I0717 18:27:18.749521  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 92/120
	I0717 18:27:19.750795  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 93/120
	I0717 18:27:20.752234  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 94/120
	I0717 18:27:21.754010  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 95/120
	I0717 18:27:22.755962  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 96/120
	I0717 18:27:23.757207  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 97/120
	I0717 18:27:24.758911  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 98/120
	I0717 18:27:25.760177  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 99/120
	I0717 18:27:26.762285  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 100/120
	I0717 18:27:27.763660  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 101/120
	I0717 18:27:28.764917  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 102/120
	I0717 18:27:29.766922  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 103/120
	I0717 18:27:30.768687  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 104/120
	I0717 18:27:31.770512  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 105/120
	I0717 18:27:32.771836  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 106/120
	I0717 18:27:33.773350  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 107/120
	I0717 18:27:34.775296  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 108/120
	I0717 18:27:35.776954  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 109/120
	I0717 18:27:36.779011  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 110/120
	I0717 18:27:37.780651  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 111/120
	I0717 18:27:38.782192  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 112/120
	I0717 18:27:39.783387  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 113/120
	I0717 18:27:40.784787  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 114/120
	I0717 18:27:41.786318  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 115/120
	I0717 18:27:42.787955  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 116/120
	I0717 18:27:43.789190  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 117/120
	I0717 18:27:44.791118  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 118/120
	I0717 18:27:45.792327  415699 main.go:141] libmachine: (ha-445282-m02) Waiting for machine to stop 119/120
	I0717 18:27:46.792864  415699 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 18:27:46.793061  415699 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-445282 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr
E0717 18:27:49.796104  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr: exit status 3 (19.087088462s)

                                                
                                                
-- stdout --
	ha-445282
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-445282-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:27:46.841655  416131 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:27:46.841913  416131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:27:46.841921  416131 out.go:304] Setting ErrFile to fd 2...
	I0717 18:27:46.841925  416131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:27:46.842106  416131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:27:46.842282  416131 out.go:298] Setting JSON to false
	I0717 18:27:46.842315  416131 mustload.go:65] Loading cluster: ha-445282
	I0717 18:27:46.842429  416131 notify.go:220] Checking for updates...
	I0717 18:27:46.842660  416131 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:27:46.842676  416131 status.go:255] checking status of ha-445282 ...
	I0717 18:27:46.843072  416131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:27:46.843132  416131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:27:46.861770  416131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35761
	I0717 18:27:46.862239  416131 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:27:46.862971  416131 main.go:141] libmachine: Using API Version  1
	I0717 18:27:46.863010  416131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:27:46.863369  416131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:27:46.863578  416131 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:27:46.865313  416131 status.go:330] ha-445282 host status = "Running" (err=<nil>)
	I0717 18:27:46.865337  416131 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:27:46.865631  416131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:27:46.865686  416131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:27:46.881000  416131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38641
	I0717 18:27:46.881422  416131 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:27:46.881882  416131 main.go:141] libmachine: Using API Version  1
	I0717 18:27:46.881906  416131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:27:46.882220  416131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:27:46.882397  416131 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:27:46.884829  416131 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:27:46.885214  416131 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:27:46.885236  416131 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:27:46.885397  416131 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:27:46.885755  416131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:27:46.885808  416131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:27:46.900130  416131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39007
	I0717 18:27:46.900554  416131 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:27:46.901033  416131 main.go:141] libmachine: Using API Version  1
	I0717 18:27:46.901058  416131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:27:46.901346  416131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:27:46.901508  416131 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:27:46.901722  416131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:27:46.901755  416131 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:27:46.904427  416131 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:27:46.904817  416131 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:27:46.904837  416131 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:27:46.904995  416131 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:27:46.905170  416131 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:27:46.905302  416131 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:27:46.905440  416131 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:27:46.989801  416131 ssh_runner.go:195] Run: systemctl --version
	I0717 18:27:46.996446  416131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:27:47.013884  416131 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:27:47.013913  416131 api_server.go:166] Checking apiserver status ...
	I0717 18:27:47.013943  416131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:27:47.029859  416131 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0717 18:27:47.039187  416131 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:27:47.039231  416131 ssh_runner.go:195] Run: ls
	I0717 18:27:47.043448  416131 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:27:47.049816  416131 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:27:47.049839  416131 status.go:422] ha-445282 apiserver status = Running (err=<nil>)
	I0717 18:27:47.049852  416131 status.go:257] ha-445282 status: &{Name:ha-445282 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:27:47.049867  416131 status.go:255] checking status of ha-445282-m02 ...
	I0717 18:27:47.050244  416131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:27:47.050289  416131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:27:47.065394  416131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41017
	I0717 18:27:47.065882  416131 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:27:47.066379  416131 main.go:141] libmachine: Using API Version  1
	I0717 18:27:47.066400  416131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:27:47.066769  416131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:27:47.066948  416131 main.go:141] libmachine: (ha-445282-m02) Calling .GetState
	I0717 18:27:47.068545  416131 status.go:330] ha-445282-m02 host status = "Running" (err=<nil>)
	I0717 18:27:47.068611  416131 host.go:66] Checking if "ha-445282-m02" exists ...
	I0717 18:27:47.068899  416131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:27:47.068958  416131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:27:47.083586  416131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
	I0717 18:27:47.083983  416131 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:27:47.084451  416131 main.go:141] libmachine: Using API Version  1
	I0717 18:27:47.084474  416131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:27:47.084801  416131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:27:47.085001  416131 main.go:141] libmachine: (ha-445282-m02) Calling .GetIP
	I0717 18:27:47.087475  416131 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:27:47.087951  416131 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:27:47.087977  416131 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:27:47.088128  416131 host.go:66] Checking if "ha-445282-m02" exists ...
	I0717 18:27:47.088468  416131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:27:47.088540  416131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:27:47.104007  416131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I0717 18:27:47.104459  416131 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:27:47.104977  416131 main.go:141] libmachine: Using API Version  1
	I0717 18:27:47.105002  416131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:27:47.105354  416131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:27:47.105559  416131 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:27:47.105773  416131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:27:47.105800  416131 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:27:47.108733  416131 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:27:47.109185  416131 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:27:47.109206  416131 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:27:47.109347  416131 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:27:47.109513  416131 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:27:47.109697  416131 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:27:47.109860  416131 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	W0717 18:28:05.500678  416131 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.198:22: connect: no route to host
	W0717 18:28:05.500779  416131 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	E0717 18:28:05.500811  416131 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:05.500819  416131 status.go:257] ha-445282-m02 status: &{Name:ha-445282-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 18:28:05.500842  416131 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:05.500854  416131 status.go:255] checking status of ha-445282-m03 ...
	I0717 18:28:05.501219  416131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:05.501274  416131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:05.516404  416131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39703
	I0717 18:28:05.516980  416131 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:05.517580  416131 main.go:141] libmachine: Using API Version  1
	I0717 18:28:05.517610  416131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:05.517981  416131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:05.518225  416131 main.go:141] libmachine: (ha-445282-m03) Calling .GetState
	I0717 18:28:05.519958  416131 status.go:330] ha-445282-m03 host status = "Running" (err=<nil>)
	I0717 18:28:05.519978  416131 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:28:05.520531  416131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:05.520598  416131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:05.536314  416131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41071
	I0717 18:28:05.536970  416131 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:05.537624  416131 main.go:141] libmachine: Using API Version  1
	I0717 18:28:05.537658  416131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:05.538029  416131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:05.538255  416131 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:28:05.541873  416131 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:05.542337  416131 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:28:05.542367  416131 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:05.542554  416131 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:28:05.542874  416131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:05.542917  416131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:05.558454  416131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38131
	I0717 18:28:05.558927  416131 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:05.559417  416131 main.go:141] libmachine: Using API Version  1
	I0717 18:28:05.559441  416131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:05.559778  416131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:05.559981  416131 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:28:05.560202  416131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:05.560224  416131 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:28:05.563266  416131 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:05.563706  416131 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:28:05.563734  416131 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:05.563908  416131 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:28:05.564071  416131 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:28:05.564237  416131 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:28:05.564388  416131 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:28:05.653775  416131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:05.671187  416131 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:28:05.671222  416131 api_server.go:166] Checking apiserver status ...
	I0717 18:28:05.671256  416131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:28:05.688062  416131 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0717 18:28:05.700056  416131 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:28:05.700114  416131 ssh_runner.go:195] Run: ls
	I0717 18:28:05.705849  416131 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:28:05.710974  416131 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:28:05.710999  416131 status.go:422] ha-445282-m03 apiserver status = Running (err=<nil>)
	I0717 18:28:05.711008  416131 status.go:257] ha-445282-m03 status: &{Name:ha-445282-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:28:05.711027  416131 status.go:255] checking status of ha-445282-m04 ...
	I0717 18:28:05.711356  416131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:05.711402  416131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:05.726961  416131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35915
	I0717 18:28:05.727530  416131 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:05.728103  416131 main.go:141] libmachine: Using API Version  1
	I0717 18:28:05.728125  416131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:05.728440  416131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:05.728695  416131 main.go:141] libmachine: (ha-445282-m04) Calling .GetState
	I0717 18:28:05.730173  416131 status.go:330] ha-445282-m04 host status = "Running" (err=<nil>)
	I0717 18:28:05.730191  416131 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:28:05.730598  416131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:05.730643  416131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:05.746470  416131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38345
	I0717 18:28:05.746912  416131 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:05.747452  416131 main.go:141] libmachine: Using API Version  1
	I0717 18:28:05.747477  416131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:05.747871  416131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:05.748087  416131 main.go:141] libmachine: (ha-445282-m04) Calling .GetIP
	I0717 18:28:05.750807  416131 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:05.751252  416131 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:28:05.751282  416131 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:05.751422  416131 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:28:05.751855  416131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:05.751910  416131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:05.768019  416131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46555
	I0717 18:28:05.768499  416131 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:05.769043  416131 main.go:141] libmachine: Using API Version  1
	I0717 18:28:05.769067  416131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:05.769390  416131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:05.769587  416131 main.go:141] libmachine: (ha-445282-m04) Calling .DriverName
	I0717 18:28:05.769807  416131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:05.769829  416131 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHHostname
	I0717 18:28:05.772691  416131 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:05.773154  416131 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:28:05.773177  416131 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:05.773356  416131 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHPort
	I0717 18:28:05.773536  416131 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHKeyPath
	I0717 18:28:05.773668  416131 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHUsername
	I0717 18:28:05.773801  416131 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m04/id_rsa Username:docker}
	I0717 18:28:05.861560  416131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:05.879303  416131 status.go:257] ha-445282-m04 status: &{Name:ha-445282-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-445282 -n ha-445282
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-445282 logs -n 25: (1.546265114s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-445282 cp ha-445282-m03:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3528186093/001/cp-test_ha-445282-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m03:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282:/home/docker/cp-test_ha-445282-m03_ha-445282.txt                       |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282 sudo cat                                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m03_ha-445282.txt                                 |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m03:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m02:/home/docker/cp-test_ha-445282-m03_ha-445282-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282-m02 sudo cat                                          | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m03_ha-445282-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m03:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04:/home/docker/cp-test_ha-445282-m03_ha-445282-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282-m04 sudo cat                                          | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m03_ha-445282-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-445282 cp testdata/cp-test.txt                                                | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3528186093/001/cp-test_ha-445282-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282:/home/docker/cp-test_ha-445282-m04_ha-445282.txt                       |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282 sudo cat                                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m04_ha-445282.txt                                 |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m02:/home/docker/cp-test_ha-445282-m04_ha-445282-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282-m02 sudo cat                                          | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m04_ha-445282-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m03:/home/docker/cp-test_ha-445282-m04_ha-445282-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282-m03 sudo cat                                          | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m04_ha-445282-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-445282 node stop m02 -v=7                                                     | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:20:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:20:57.436165  411620 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:20:57.436283  411620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:20:57.436291  411620 out.go:304] Setting ErrFile to fd 2...
	I0717 18:20:57.436295  411620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:20:57.436465  411620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:20:57.437064  411620 out.go:298] Setting JSON to false
	I0717 18:20:57.437983  411620 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7400,"bootTime":1721233057,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:20:57.438039  411620 start.go:139] virtualization: kvm guest
	I0717 18:20:57.440089  411620 out.go:177] * [ha-445282] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:20:57.441619  411620 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 18:20:57.441693  411620 notify.go:220] Checking for updates...
	I0717 18:20:57.444079  411620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:20:57.445236  411620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:20:57.446353  411620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:20:57.447579  411620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:20:57.448901  411620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:20:57.450255  411620 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:20:57.483939  411620 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 18:20:57.485210  411620 start.go:297] selected driver: kvm2
	I0717 18:20:57.485228  411620 start.go:901] validating driver "kvm2" against <nil>
	I0717 18:20:57.485240  411620 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:20:57.485865  411620 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:20:57.485961  411620 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:20:57.500703  411620 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:20:57.500759  411620 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 18:20:57.501060  411620 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:20:57.501137  411620 cni.go:84] Creating CNI manager for ""
	I0717 18:20:57.501149  411620 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0717 18:20:57.501157  411620 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 18:20:57.501223  411620 start.go:340] cluster config:
	{Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:20:57.501315  411620 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:20:57.503126  411620 out.go:177] * Starting "ha-445282" primary control-plane node in "ha-445282" cluster
	I0717 18:20:57.504244  411620 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:20:57.504283  411620 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:20:57.504293  411620 cache.go:56] Caching tarball of preloaded images
	I0717 18:20:57.504386  411620 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:20:57.504412  411620 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:20:57.504751  411620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:20:57.504776  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json: {Name:mk3c4fde3e4f65735bd71ffe5ec31a71e72453f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:20:57.504962  411620 start.go:360] acquireMachinesLock for ha-445282: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:20:57.505003  411620 start.go:364] duration metric: took 20.55µs to acquireMachinesLock for "ha-445282"
	I0717 18:20:57.505026  411620 start.go:93] Provisioning new machine with config: &{Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:20:57.505087  411620 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 18:20:57.506715  411620 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 18:20:57.506867  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:20:57.506916  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:20:57.522000  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46147
	I0717 18:20:57.522438  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:20:57.523017  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:20:57.523038  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:20:57.523362  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:20:57.523549  411620 main.go:141] libmachine: (ha-445282) Calling .GetMachineName
	I0717 18:20:57.523707  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:20:57.523861  411620 start.go:159] libmachine.API.Create for "ha-445282" (driver="kvm2")
	I0717 18:20:57.523892  411620 client.go:168] LocalClient.Create starting
	I0717 18:20:57.523931  411620 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem
	I0717 18:20:57.523983  411620 main.go:141] libmachine: Decoding PEM data...
	I0717 18:20:57.523997  411620 main.go:141] libmachine: Parsing certificate...
	I0717 18:20:57.524050  411620 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem
	I0717 18:20:57.524068  411620 main.go:141] libmachine: Decoding PEM data...
	I0717 18:20:57.524081  411620 main.go:141] libmachine: Parsing certificate...
	I0717 18:20:57.524097  411620 main.go:141] libmachine: Running pre-create checks...
	I0717 18:20:57.524115  411620 main.go:141] libmachine: (ha-445282) Calling .PreCreateCheck
	I0717 18:20:57.524459  411620 main.go:141] libmachine: (ha-445282) Calling .GetConfigRaw
	I0717 18:20:57.524871  411620 main.go:141] libmachine: Creating machine...
	I0717 18:20:57.524890  411620 main.go:141] libmachine: (ha-445282) Calling .Create
	I0717 18:20:57.524998  411620 main.go:141] libmachine: (ha-445282) Creating KVM machine...
	I0717 18:20:57.526540  411620 main.go:141] libmachine: (ha-445282) DBG | found existing default KVM network
	I0717 18:20:57.527290  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:20:57.527160  411643 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0717 18:20:57.527318  411620 main.go:141] libmachine: (ha-445282) DBG | created network xml: 
	I0717 18:20:57.527330  411620 main.go:141] libmachine: (ha-445282) DBG | <network>
	I0717 18:20:57.527343  411620 main.go:141] libmachine: (ha-445282) DBG |   <name>mk-ha-445282</name>
	I0717 18:20:57.527368  411620 main.go:141] libmachine: (ha-445282) DBG |   <dns enable='no'/>
	I0717 18:20:57.527381  411620 main.go:141] libmachine: (ha-445282) DBG |   
	I0717 18:20:57.527387  411620 main.go:141] libmachine: (ha-445282) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 18:20:57.527392  411620 main.go:141] libmachine: (ha-445282) DBG |     <dhcp>
	I0717 18:20:57.527397  411620 main.go:141] libmachine: (ha-445282) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 18:20:57.527412  411620 main.go:141] libmachine: (ha-445282) DBG |     </dhcp>
	I0717 18:20:57.527416  411620 main.go:141] libmachine: (ha-445282) DBG |   </ip>
	I0717 18:20:57.527421  411620 main.go:141] libmachine: (ha-445282) DBG |   
	I0717 18:20:57.527428  411620 main.go:141] libmachine: (ha-445282) DBG | </network>
	I0717 18:20:57.527433  411620 main.go:141] libmachine: (ha-445282) DBG | 
	I0717 18:20:57.532943  411620 main.go:141] libmachine: (ha-445282) DBG | trying to create private KVM network mk-ha-445282 192.168.39.0/24...
	I0717 18:20:57.598596  411620 main.go:141] libmachine: (ha-445282) DBG | private KVM network mk-ha-445282 192.168.39.0/24 created
	I0717 18:20:57.598652  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:20:57.598558  411643 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:20:57.598666  411620 main.go:141] libmachine: (ha-445282) Setting up store path in /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282 ...
	I0717 18:20:57.598689  411620 main.go:141] libmachine: (ha-445282) Building disk image from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 18:20:57.598749  411620 main.go:141] libmachine: (ha-445282) Downloading /home/jenkins/minikube-integration/19282-392903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 18:20:57.861831  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:20:57.861709  411643 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa...
	I0717 18:20:58.033735  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:20:58.033596  411643 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/ha-445282.rawdisk...
	I0717 18:20:58.033779  411620 main.go:141] libmachine: (ha-445282) DBG | Writing magic tar header
	I0717 18:20:58.033790  411620 main.go:141] libmachine: (ha-445282) DBG | Writing SSH key tar header
	I0717 18:20:58.033798  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:20:58.033716  411643 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282 ...
	I0717 18:20:58.033888  411620 main.go:141] libmachine: (ha-445282) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282
	I0717 18:20:58.033909  411620 main.go:141] libmachine: (ha-445282) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines
	I0717 18:20:58.033935  411620 main.go:141] libmachine: (ha-445282) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282 (perms=drwx------)
	I0717 18:20:58.033950  411620 main.go:141] libmachine: (ha-445282) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines (perms=drwxr-xr-x)
	I0717 18:20:58.033965  411620 main.go:141] libmachine: (ha-445282) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube (perms=drwxr-xr-x)
	I0717 18:20:58.033981  411620 main.go:141] libmachine: (ha-445282) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903 (perms=drwxrwxr-x)
	I0717 18:20:58.034001  411620 main.go:141] libmachine: (ha-445282) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:20:58.034014  411620 main.go:141] libmachine: (ha-445282) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 18:20:58.034027  411620 main.go:141] libmachine: (ha-445282) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 18:20:58.034036  411620 main.go:141] libmachine: (ha-445282) Creating domain...
	I0717 18:20:58.034051  411620 main.go:141] libmachine: (ha-445282) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903
	I0717 18:20:58.034064  411620 main.go:141] libmachine: (ha-445282) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 18:20:58.034081  411620 main.go:141] libmachine: (ha-445282) DBG | Checking permissions on dir: /home/jenkins
	I0717 18:20:58.034093  411620 main.go:141] libmachine: (ha-445282) DBG | Checking permissions on dir: /home
	I0717 18:20:58.034101  411620 main.go:141] libmachine: (ha-445282) DBG | Skipping /home - not owner
	I0717 18:20:58.035132  411620 main.go:141] libmachine: (ha-445282) define libvirt domain using xml: 
	I0717 18:20:58.035152  411620 main.go:141] libmachine: (ha-445282) <domain type='kvm'>
	I0717 18:20:58.035159  411620 main.go:141] libmachine: (ha-445282)   <name>ha-445282</name>
	I0717 18:20:58.035164  411620 main.go:141] libmachine: (ha-445282)   <memory unit='MiB'>2200</memory>
	I0717 18:20:58.035208  411620 main.go:141] libmachine: (ha-445282)   <vcpu>2</vcpu>
	I0717 18:20:58.035237  411620 main.go:141] libmachine: (ha-445282)   <features>
	I0717 18:20:58.035267  411620 main.go:141] libmachine: (ha-445282)     <acpi/>
	I0717 18:20:58.035287  411620 main.go:141] libmachine: (ha-445282)     <apic/>
	I0717 18:20:58.035297  411620 main.go:141] libmachine: (ha-445282)     <pae/>
	I0717 18:20:58.035318  411620 main.go:141] libmachine: (ha-445282)     
	I0717 18:20:58.035331  411620 main.go:141] libmachine: (ha-445282)   </features>
	I0717 18:20:58.035343  411620 main.go:141] libmachine: (ha-445282)   <cpu mode='host-passthrough'>
	I0717 18:20:58.035354  411620 main.go:141] libmachine: (ha-445282)   
	I0717 18:20:58.035369  411620 main.go:141] libmachine: (ha-445282)   </cpu>
	I0717 18:20:58.035380  411620 main.go:141] libmachine: (ha-445282)   <os>
	I0717 18:20:58.035390  411620 main.go:141] libmachine: (ha-445282)     <type>hvm</type>
	I0717 18:20:58.035400  411620 main.go:141] libmachine: (ha-445282)     <boot dev='cdrom'/>
	I0717 18:20:58.035411  411620 main.go:141] libmachine: (ha-445282)     <boot dev='hd'/>
	I0717 18:20:58.035421  411620 main.go:141] libmachine: (ha-445282)     <bootmenu enable='no'/>
	I0717 18:20:58.035430  411620 main.go:141] libmachine: (ha-445282)   </os>
	I0717 18:20:58.035441  411620 main.go:141] libmachine: (ha-445282)   <devices>
	I0717 18:20:58.035454  411620 main.go:141] libmachine: (ha-445282)     <disk type='file' device='cdrom'>
	I0717 18:20:58.035463  411620 main.go:141] libmachine: (ha-445282)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/boot2docker.iso'/>
	I0717 18:20:58.035471  411620 main.go:141] libmachine: (ha-445282)       <target dev='hdc' bus='scsi'/>
	I0717 18:20:58.035495  411620 main.go:141] libmachine: (ha-445282)       <readonly/>
	I0717 18:20:58.035511  411620 main.go:141] libmachine: (ha-445282)     </disk>
	I0717 18:20:58.035538  411620 main.go:141] libmachine: (ha-445282)     <disk type='file' device='disk'>
	I0717 18:20:58.035556  411620 main.go:141] libmachine: (ha-445282)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 18:20:58.035570  411620 main.go:141] libmachine: (ha-445282)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/ha-445282.rawdisk'/>
	I0717 18:20:58.035581  411620 main.go:141] libmachine: (ha-445282)       <target dev='hda' bus='virtio'/>
	I0717 18:20:58.035605  411620 main.go:141] libmachine: (ha-445282)     </disk>
	I0717 18:20:58.035617  411620 main.go:141] libmachine: (ha-445282)     <interface type='network'>
	I0717 18:20:58.035628  411620 main.go:141] libmachine: (ha-445282)       <source network='mk-ha-445282'/>
	I0717 18:20:58.035637  411620 main.go:141] libmachine: (ha-445282)       <model type='virtio'/>
	I0717 18:20:58.035660  411620 main.go:141] libmachine: (ha-445282)     </interface>
	I0717 18:20:58.035676  411620 main.go:141] libmachine: (ha-445282)     <interface type='network'>
	I0717 18:20:58.035697  411620 main.go:141] libmachine: (ha-445282)       <source network='default'/>
	I0717 18:20:58.035720  411620 main.go:141] libmachine: (ha-445282)       <model type='virtio'/>
	I0717 18:20:58.035734  411620 main.go:141] libmachine: (ha-445282)     </interface>
	I0717 18:20:58.035746  411620 main.go:141] libmachine: (ha-445282)     <serial type='pty'>
	I0717 18:20:58.035760  411620 main.go:141] libmachine: (ha-445282)       <target port='0'/>
	I0717 18:20:58.035772  411620 main.go:141] libmachine: (ha-445282)     </serial>
	I0717 18:20:58.035811  411620 main.go:141] libmachine: (ha-445282)     <console type='pty'>
	I0717 18:20:58.035829  411620 main.go:141] libmachine: (ha-445282)       <target type='serial' port='0'/>
	I0717 18:20:58.035845  411620 main.go:141] libmachine: (ha-445282)     </console>
	I0717 18:20:58.035858  411620 main.go:141] libmachine: (ha-445282)     <rng model='virtio'>
	I0717 18:20:58.035870  411620 main.go:141] libmachine: (ha-445282)       <backend model='random'>/dev/random</backend>
	I0717 18:20:58.035881  411620 main.go:141] libmachine: (ha-445282)     </rng>
	I0717 18:20:58.035891  411620 main.go:141] libmachine: (ha-445282)     
	I0717 18:20:58.035899  411620 main.go:141] libmachine: (ha-445282)     
	I0717 18:20:58.035916  411620 main.go:141] libmachine: (ha-445282)   </devices>
	I0717 18:20:58.035928  411620 main.go:141] libmachine: (ha-445282) </domain>
	I0717 18:20:58.035948  411620 main.go:141] libmachine: (ha-445282) 
	I0717 18:20:58.040261  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:b8:ed:24 in network default
	I0717 18:20:58.040842  411620 main.go:141] libmachine: (ha-445282) Ensuring networks are active...
	I0717 18:20:58.040869  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:20:58.041542  411620 main.go:141] libmachine: (ha-445282) Ensuring network default is active
	I0717 18:20:58.041832  411620 main.go:141] libmachine: (ha-445282) Ensuring network mk-ha-445282 is active
	I0717 18:20:58.042308  411620 main.go:141] libmachine: (ha-445282) Getting domain xml...
	I0717 18:20:58.043039  411620 main.go:141] libmachine: (ha-445282) Creating domain...
	I0717 18:20:59.220885  411620 main.go:141] libmachine: (ha-445282) Waiting to get IP...
	I0717 18:20:59.221617  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:20:59.221956  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:20:59.222014  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:20:59.221959  411643 retry.go:31] will retry after 202.848571ms: waiting for machine to come up
	I0717 18:20:59.426397  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:20:59.426991  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:20:59.427014  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:20:59.426935  411643 retry.go:31] will retry after 305.888058ms: waiting for machine to come up
	I0717 18:20:59.734533  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:20:59.734978  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:20:59.735008  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:20:59.734919  411643 retry.go:31] will retry after 311.867851ms: waiting for machine to come up
	I0717 18:21:00.048631  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:00.049063  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:00.049084  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:00.049036  411643 retry.go:31] will retry after 590.611781ms: waiting for machine to come up
	I0717 18:21:00.640804  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:00.641354  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:00.641387  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:00.641305  411643 retry.go:31] will retry after 624.757031ms: waiting for machine to come up
	I0717 18:21:01.268174  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:01.268594  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:01.268619  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:01.268568  411643 retry.go:31] will retry after 602.906786ms: waiting for machine to come up
	I0717 18:21:01.873404  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:01.873843  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:01.873899  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:01.873797  411643 retry.go:31] will retry after 982.323542ms: waiting for machine to come up
	I0717 18:21:02.857484  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:02.857871  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:02.857905  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:02.857809  411643 retry.go:31] will retry after 1.327628548s: waiting for machine to come up
	I0717 18:21:04.187336  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:04.187719  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:04.187749  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:04.187671  411643 retry.go:31] will retry after 1.147670985s: waiting for machine to come up
	I0717 18:21:05.336932  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:05.337324  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:05.337356  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:05.337280  411643 retry.go:31] will retry after 1.65527994s: waiting for machine to come up
	I0717 18:21:06.993944  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:06.994349  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:06.994371  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:06.994320  411643 retry.go:31] will retry after 2.692639352s: waiting for machine to come up
	I0717 18:21:09.689766  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:09.690211  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:09.690244  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:09.690157  411643 retry.go:31] will retry after 3.508073211s: waiting for machine to come up
	I0717 18:21:13.199436  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:13.199915  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:13.199940  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:13.199876  411643 retry.go:31] will retry after 4.513256721s: waiting for machine to come up
	I0717 18:21:17.714267  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:17.714651  411620 main.go:141] libmachine: (ha-445282) Found IP for machine: 192.168.39.147
	I0717 18:21:17.714674  411620 main.go:141] libmachine: (ha-445282) Reserving static IP address...
	I0717 18:21:17.714687  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has current primary IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:17.715022  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find host DHCP lease matching {name: "ha-445282", mac: "52:54:00:1e:00:89", ip: "192.168.39.147"} in network mk-ha-445282
	I0717 18:21:17.785335  411620 main.go:141] libmachine: (ha-445282) DBG | Getting to WaitForSSH function...
	I0717 18:21:17.785369  411620 main.go:141] libmachine: (ha-445282) Reserved static IP address: 192.168.39.147
	I0717 18:21:17.785382  411620 main.go:141] libmachine: (ha-445282) Waiting for SSH to be available...
	I0717 18:21:17.788027  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:17.788426  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282
	I0717 18:21:17.788454  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find defined IP address of network mk-ha-445282 interface with MAC address 52:54:00:1e:00:89
	I0717 18:21:17.788641  411620 main.go:141] libmachine: (ha-445282) DBG | Using SSH client type: external
	I0717 18:21:17.788665  411620 main.go:141] libmachine: (ha-445282) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa (-rw-------)
	I0717 18:21:17.788701  411620 main.go:141] libmachine: (ha-445282) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:21:17.788718  411620 main.go:141] libmachine: (ha-445282) DBG | About to run SSH command:
	I0717 18:21:17.788731  411620 main.go:141] libmachine: (ha-445282) DBG | exit 0
	I0717 18:21:17.792256  411620 main.go:141] libmachine: (ha-445282) DBG | SSH cmd err, output: exit status 255: 
	I0717 18:21:17.792281  411620 main.go:141] libmachine: (ha-445282) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 18:21:17.792291  411620 main.go:141] libmachine: (ha-445282) DBG | command : exit 0
	I0717 18:21:17.792297  411620 main.go:141] libmachine: (ha-445282) DBG | err     : exit status 255
	I0717 18:21:17.792307  411620 main.go:141] libmachine: (ha-445282) DBG | output  : 
	I0717 18:21:20.792509  411620 main.go:141] libmachine: (ha-445282) DBG | Getting to WaitForSSH function...
	I0717 18:21:20.794941  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:20.795337  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:20.795365  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:20.795500  411620 main.go:141] libmachine: (ha-445282) DBG | Using SSH client type: external
	I0717 18:21:20.795544  411620 main.go:141] libmachine: (ha-445282) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa (-rw-------)
	I0717 18:21:20.795571  411620 main.go:141] libmachine: (ha-445282) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:21:20.795623  411620 main.go:141] libmachine: (ha-445282) DBG | About to run SSH command:
	I0717 18:21:20.795648  411620 main.go:141] libmachine: (ha-445282) DBG | exit 0
	I0717 18:21:20.920319  411620 main.go:141] libmachine: (ha-445282) DBG | SSH cmd err, output: <nil>: 
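[annotation] The WaitForSSH exchange above shells out to /usr/bin/ssh with host-key checking disabled and runs "exit 0"; a zero exit status is taken as proof the guest's sshd is up, while the earlier exit status 255 just meant "not ready yet". A rough sketch of that probe via os/exec, assuming an abbreviated option list and placeholder key path and user:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "exit 0" on the guest through the system ssh binary,
// the same style of probe as the WaitForSSH step in the log.
func sshReady(ip, keyPath string) bool {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@" + ip,
		"exit 0",
	}
	// A non-zero exit (e.g. status 255 while sshd is still starting)
	// simply means the guest is not reachable yet.
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	ip := "192.168.39.147"                                      // from the DHCP lease above
	key := "/home/jenkins/.minikube/machines/ha-445282/id_rsa"  // placeholder path
	for i := 0; i < 10; i++ {
		if sshReady(ip, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}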
	I0717 18:21:20.920551  411620 main.go:141] libmachine: (ha-445282) KVM machine creation complete!
	I0717 18:21:20.920977  411620 main.go:141] libmachine: (ha-445282) Calling .GetConfigRaw
	I0717 18:21:20.921496  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:20.921689  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:20.921921  411620 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 18:21:20.921935  411620 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:21:20.923205  411620 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 18:21:20.923219  411620 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 18:21:20.923224  411620 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 18:21:20.923230  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:20.925849  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:20.926217  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:20.926241  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:20.926394  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:20.926578  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:20.926752  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:20.926884  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:20.927072  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:21:20.927266  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:21:20.927278  411620 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 18:21:21.027676  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:21:21.027706  411620 main.go:141] libmachine: Detecting the provisioner...
	I0717 18:21:21.027715  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:21.030452  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.030749  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:21.030781  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.030969  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:21.031148  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.031277  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.031362  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:21.031490  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:21:21.031677  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:21:21.031692  411620 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 18:21:21.137271  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 18:21:21.137364  411620 main.go:141] libmachine: found compatible host: buildroot
	I0717 18:21:21.137371  411620 main.go:141] libmachine: Provisioning with buildroot...
	I0717 18:21:21.137381  411620 main.go:141] libmachine: (ha-445282) Calling .GetMachineName
	I0717 18:21:21.137638  411620 buildroot.go:166] provisioning hostname "ha-445282"
	I0717 18:21:21.137672  411620 main.go:141] libmachine: (ha-445282) Calling .GetMachineName
	I0717 18:21:21.137879  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:21.140437  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.140826  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:21.140850  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.140999  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:21.141215  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.141377  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.141501  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:21.141700  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:21:21.141925  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:21:21.141942  411620 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-445282 && echo "ha-445282" | sudo tee /etc/hostname
	I0717 18:21:21.262858  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-445282
	
	I0717 18:21:21.262889  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:21.265383  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.265781  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:21.265812  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.265956  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:21.266165  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.266344  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.266509  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:21.266698  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:21:21.266900  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:21:21.266922  411620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-445282' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-445282/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-445282' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:21:21.377228  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:21:21.377262  411620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 18:21:21.377297  411620 buildroot.go:174] setting up certificates
	I0717 18:21:21.377310  411620 provision.go:84] configureAuth start
	I0717 18:21:21.377328  411620 main.go:141] libmachine: (ha-445282) Calling .GetMachineName
	I0717 18:21:21.377673  411620 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:21:21.380125  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.380419  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:21.380499  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.380561  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:21.382442  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.382734  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:21.382764  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.382864  411620 provision.go:143] copyHostCerts
	I0717 18:21:21.382908  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:21:21.382946  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 18:21:21.382959  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:21:21.383040  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 18:21:21.383149  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:21:21.383177  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 18:21:21.383184  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:21:21.383224  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 18:21:21.383288  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:21:21.383315  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 18:21:21.383324  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:21:21.383356  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 18:21:21.383460  411620 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.ha-445282 san=[127.0.0.1 192.168.39.147 ha-445282 localhost minikube]
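[annotation] The provision.go line above generates a server certificate whose SAN list covers the loopback address, the machine IP, and the host names shown. A toy sketch of issuing a certificate with that same SAN set using crypto/x509; note this self-signs for brevity, whereas minikube actually signs the server cert with its own CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative only: a self-signed cert carrying the SANs from the log.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-445282"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line: IPs plus host names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.147")},
		DNSNames:    []string{"ha-445282", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}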
	I0717 18:21:21.666961  411620 provision.go:177] copyRemoteCerts
	I0717 18:21:21.667030  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:21:21.667061  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:21.669819  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.670087  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:21.670117  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.670229  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:21.670442  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.670567  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:21.670665  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:21:21.750463  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 18:21:21.750539  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 18:21:21.791481  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 18:21:21.791553  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 18:21:21.814568  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 18:21:21.814639  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:21:21.836914  411620 provision.go:87] duration metric: took 459.58856ms to configureAuth
	I0717 18:21:21.836947  411620 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:21:21.837116  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:21:21.837209  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:21.839775  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.840118  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:21.840148  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.840324  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:21.840645  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.840801  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.840968  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:21.841171  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:21:21.841345  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:21:21.841362  411620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:21:22.095838  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:21:22.095869  411620 main.go:141] libmachine: Checking connection to Docker...
	I0717 18:21:22.095877  411620 main.go:141] libmachine: (ha-445282) Calling .GetURL
	I0717 18:21:22.097344  411620 main.go:141] libmachine: (ha-445282) DBG | Using libvirt version 6000000
	I0717 18:21:22.099561  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.099938  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:22.099955  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.100220  411620 main.go:141] libmachine: Docker is up and running!
	I0717 18:21:22.100231  411620 main.go:141] libmachine: Reticulating splines...
	I0717 18:21:22.100240  411620 client.go:171] duration metric: took 24.576338191s to LocalClient.Create
	I0717 18:21:22.100265  411620 start.go:167] duration metric: took 24.576406812s to libmachine.API.Create "ha-445282"
	I0717 18:21:22.100275  411620 start.go:293] postStartSetup for "ha-445282" (driver="kvm2")
	I0717 18:21:22.100285  411620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:21:22.100303  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:22.100596  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:21:22.100649  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:22.102940  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.103269  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:22.103300  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.103402  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:22.103602  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:22.103793  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:22.103946  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:21:22.186915  411620 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:21:22.191005  411620 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:21:22.191036  411620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 18:21:22.191108  411620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 18:21:22.191183  411620 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 18:21:22.191193  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /etc/ssl/certs/4001712.pem
	I0717 18:21:22.191282  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:21:22.200340  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:21:22.223193  411620 start.go:296] duration metric: took 122.904606ms for postStartSetup
	I0717 18:21:22.223247  411620 main.go:141] libmachine: (ha-445282) Calling .GetConfigRaw
	I0717 18:21:22.223792  411620 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:21:22.226406  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.226771  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:22.226811  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.227024  411620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:21:22.227223  411620 start.go:128] duration metric: took 24.722125053s to createHost
	I0717 18:21:22.227248  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:22.229579  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.229940  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:22.229976  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.230092  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:22.230312  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:22.230451  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:22.230635  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:22.230783  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:21:22.230967  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:21:22.230980  411620 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:21:22.332986  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721240482.301507503
	
	I0717 18:21:22.333009  411620 fix.go:216] guest clock: 1721240482.301507503
	I0717 18:21:22.333016  411620 fix.go:229] Guest: 2024-07-17 18:21:22.301507503 +0000 UTC Remote: 2024-07-17 18:21:22.227234993 +0000 UTC m=+24.826912968 (delta=74.27251ms)
	I0717 18:21:22.333036  411620 fix.go:200] guest clock delta is within tolerance: 74.27251ms
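[annotation] The two fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the host clock, and accept the machine when the absolute delta stays under a tolerance. A toy version of that comparison; the 2-second tolerance here is an assumption for illustration, not minikube's configured value:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest clock is close enough to the
// host clock; the tolerance value is illustrative.
func clockDeltaOK(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(74 * time.Millisecond) // e.g. the ~74.27ms delta in the log
	delta, ok := clockDeltaOK(host, guest, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}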
	I0717 18:21:22.333041  411620 start.go:83] releasing machines lock for "ha-445282", held for 24.828027677s
	I0717 18:21:22.333060  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:22.333328  411620 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:21:22.335990  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.336328  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:22.336359  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.336526  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:22.337023  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:22.337180  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:22.337256  411620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:21:22.337314  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:22.337395  411620 ssh_runner.go:195] Run: cat /version.json
	I0717 18:21:22.337417  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:22.339892  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.340114  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.340199  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:22.340224  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.340351  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:22.340472  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:22.340499  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:22.340519  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.340629  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:22.340697  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:22.340779  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:21:22.340860  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:22.340974  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:22.341105  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:21:22.417364  411620 ssh_runner.go:195] Run: systemctl --version
	I0717 18:21:22.439526  411620 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:21:22.594413  411620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:21:22.600293  411620 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:21:22.600368  411620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:21:22.617016  411620 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:21:22.617034  411620 start.go:495] detecting cgroup driver to use...
	I0717 18:21:22.617090  411620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:21:22.635011  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:21:22.649175  411620 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:21:22.649231  411620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:21:22.663527  411620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:21:22.677441  411620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:21:22.785761  411620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:21:22.943623  411620 docker.go:233] disabling docker service ...
	I0717 18:21:22.943707  411620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:21:22.958320  411620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:21:22.971036  411620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:21:23.098713  411620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:21:23.217720  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:21:23.231673  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:21:23.249150  411620 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:21:23.249232  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:21:23.259442  411620 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:21:23.259510  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:21:23.270000  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:21:23.280540  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:21:23.290803  411620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:21:23.301859  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:21:23.312222  411620 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:21:23.328710  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:21:23.338990  411620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:21:23.348295  411620 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:21:23.348340  411620 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:21:23.361109  411620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:21:23.370165  411620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:21:23.490594  411620 ssh_runner.go:195] Run: sudo systemctl restart crio
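[annotation] The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl) and then restarts CRI-O so the runtime picks the settings up. A small Go sketch of the same rewrite-a-key pattern on a local copy of the file; the regex mirrors the sed expressions in the log, the file path is a stand-in, and only two of the keys are shown:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey replaces an existing `key = ...` line, mimicking the
// `sed -i 's|^.*key = .*$|key = "value"|'` calls in the log.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
}

func main() {
	path := "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
	// After writing, the log shows `sudo systemctl restart crio`
	// so the runtime reloads its configuration.
}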
	I0717 18:21:23.620993  411620 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:21:23.621061  411620 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:21:23.625679  411620 start.go:563] Will wait 60s for crictl version
	I0717 18:21:23.625725  411620 ssh_runner.go:195] Run: which crictl
	I0717 18:21:23.629274  411620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:21:23.672452  411620 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:21:23.672588  411620 ssh_runner.go:195] Run: crio --version
	I0717 18:21:23.700008  411620 ssh_runner.go:195] Run: crio --version
	I0717 18:21:23.728007  411620 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:21:23.729368  411620 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:21:23.732093  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:23.732516  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:23.732545  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:23.732767  411620 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:21:23.736607  411620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:21:23.749270  411620 kubeadm.go:883] updating cluster {Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:21:23.749370  411620 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:21:23.749412  411620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:21:23.780984  411620 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:21:23.781048  411620 ssh_runner.go:195] Run: which lz4
	I0717 18:21:23.784708  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 18:21:23.784783  411620 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:21:23.788825  411620 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:21:23.788850  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:21:25.116307  411620 crio.go:462] duration metric: took 1.331540851s to copy over tarball
	I0717 18:21:25.116375  411620 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:21:27.178175  411620 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.061761749s)
	I0717 18:21:27.178212  411620 crio.go:469] duration metric: took 2.061875001s to extract the tarball
	I0717 18:21:27.178223  411620 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:21:27.214992  411620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:21:27.256698  411620 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:21:27.256720  411620 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:21:27.256729  411620 kubeadm.go:934] updating node { 192.168.39.147 8443 v1.30.2 crio true true} ...
	I0717 18:21:27.256851  411620 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-445282 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:21:27.256921  411620 ssh_runner.go:195] Run: crio config
	I0717 18:21:27.304125  411620 cni.go:84] Creating CNI manager for ""
	I0717 18:21:27.304149  411620 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 18:21:27.304167  411620 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:21:27.304190  411620 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-445282 NodeName:ha-445282 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:21:27.304315  411620 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-445282"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:21:27.304339  411620 kube-vip.go:115] generating kube-vip config ...
	I0717 18:21:27.304382  411620 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 18:21:27.322496  411620 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 18:21:27.322634  411620 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0717 18:21:27.322721  411620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:21:27.332415  411620 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:21:27.332494  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 18:21:27.341790  411620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0717 18:21:27.358131  411620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:21:27.375089  411620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0717 18:21:27.391904  411620 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0717 18:21:27.408237  411620 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 18:21:27.412061  411620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:21:27.423919  411620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:21:27.535667  411620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:21:27.553931  411620 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282 for IP: 192.168.39.147
	I0717 18:21:27.553956  411620 certs.go:194] generating shared ca certs ...
	I0717 18:21:27.553990  411620 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:27.554163  411620 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 18:21:27.554203  411620 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 18:21:27.554215  411620 certs.go:256] generating profile certs ...
	I0717 18:21:27.554275  411620 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key
	I0717 18:21:27.554289  411620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.crt with IP's: []
	I0717 18:21:27.887939  411620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.crt ...
	I0717 18:21:27.887977  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.crt: {Name:mk848572ed450a3c0e854a18c6d204c6a1ba57ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:27.888171  411620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key ...
	I0717 18:21:27.888183  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key: {Name:mk7325569a4e28ec58a5018d73ce806286c4b119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:27.888268  411620 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.17e1a0f3
	I0717 18:21:27.888296  411620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.17e1a0f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.147 192.168.39.254]
	I0717 18:21:27.962908  411620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.17e1a0f3 ...
	I0717 18:21:27.962942  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.17e1a0f3: {Name:mkb0a3a35931d3a052f3a164e025c02dd7779027 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:27.963108  411620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.17e1a0f3 ...
	I0717 18:21:27.963120  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.17e1a0f3: {Name:mk654e9c64fa1f1fd4c12efd7fb99ccb75cfcd8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:27.963196  411620 certs.go:381] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.17e1a0f3 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt
	I0717 18:21:27.963288  411620 certs.go:385] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.17e1a0f3 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key
	I0717 18:21:27.963347  411620 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key
	I0717 18:21:27.963361  411620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt with IP's: []
	I0717 18:21:28.111633  411620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt ...
	I0717 18:21:28.111665  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt: {Name:mkac7c5f45728ceef72617ed8d12521e601336b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:28.111822  411620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key ...
	I0717 18:21:28.111832  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key: {Name:mkf06ba1e36571cdd5d188ac594df13edd4b234f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:28.111904  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 18:21:28.111920  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 18:21:28.111932  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 18:21:28.111944  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 18:21:28.111968  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 18:21:28.111980  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 18:21:28.111993  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 18:21:28.112004  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 18:21:28.112060  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 18:21:28.112097  411620 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 18:21:28.112107  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 18:21:28.112125  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 18:21:28.112154  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:21:28.112175  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 18:21:28.112212  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:21:28.112244  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /usr/share/ca-certificates/4001712.pem
	I0717 18:21:28.112257  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:21:28.112266  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem -> /usr/share/ca-certificates/400171.pem
	I0717 18:21:28.112946  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:21:28.139960  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:21:28.171326  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:21:28.200748  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 18:21:28.223571  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 18:21:28.246441  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:21:28.269964  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:21:28.296771  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:21:28.331660  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 18:21:28.359733  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:21:28.382617  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 18:21:28.409625  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:21:28.425228  411620 ssh_runner.go:195] Run: openssl version
	I0717 18:21:28.430978  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:21:28.441324  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:21:28.445653  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:21:28.445703  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:21:28.451339  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:21:28.461307  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 18:21:28.471308  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 18:21:28.475698  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 18:21:28.475750  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 18:21:28.481428  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 18:21:28.491354  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 18:21:28.501324  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 18:21:28.505653  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 18:21:28.505702  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 18:21:28.511073  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:21:28.521506  411620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:21:28.525658  411620 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 18:21:28.525708  411620 kubeadm.go:392] StartCluster: {Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:21:28.525783  411620 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:21:28.525839  411620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:21:28.561156  411620 cri.go:89] found id: ""
	I0717 18:21:28.561239  411620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:21:28.570672  411620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:21:28.582211  411620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:21:28.592945  411620 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:21:28.592963  411620 kubeadm.go:157] found existing configuration files:
	
	I0717 18:21:28.593005  411620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:21:28.601621  411620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:21:28.601667  411620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:21:28.611729  411620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:21:28.622365  411620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:21:28.622430  411620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:21:28.632965  411620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:21:28.642575  411620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:21:28.642830  411620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:21:28.652364  411620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:21:28.661729  411620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:21:28.661771  411620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:21:28.670865  411620 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:21:28.789289  411620 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:21:28.789415  411620 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:21:28.908696  411620 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:21:28.908844  411620 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:21:28.908961  411620 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:21:29.111539  411620 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:21:29.171105  411620 out.go:204]   - Generating certificates and keys ...
	I0717 18:21:29.171275  411620 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:21:29.171391  411620 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:21:29.485093  411620 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 18:21:29.546994  411620 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 18:21:29.614542  411620 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 18:21:29.991363  411620 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 18:21:30.142705  411620 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 18:21:30.143042  411620 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-445282 localhost] and IPs [192.168.39.147 127.0.0.1 ::1]
	I0717 18:21:30.497273  411620 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 18:21:30.497682  411620 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-445282 localhost] and IPs [192.168.39.147 127.0.0.1 ::1]
	I0717 18:21:30.632333  411620 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 18:21:30.891004  411620 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 18:21:31.045778  411620 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 18:21:31.046110  411620 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:21:31.361884  411620 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:21:31.426103  411620 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:21:31.532864  411620 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:21:31.839968  411620 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:21:32.206433  411620 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:21:32.207041  411620 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:21:32.211248  411620 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:21:32.213444  411620 out.go:204]   - Booting up control plane ...
	I0717 18:21:32.213559  411620 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:21:32.213654  411620 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:21:32.213746  411620 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:21:32.227836  411620 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:21:32.228712  411620 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:21:32.228789  411620 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:21:32.352273  411620 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:21:32.352383  411620 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:21:32.853603  411620 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.726884ms
	I0717 18:21:32.853731  411620 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:21:38.851106  411620 kubeadm.go:310] [api-check] The API server is healthy after 6.000784359s
	I0717 18:21:38.867651  411620 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:21:38.881859  411620 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:21:38.910079  411620 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:21:38.910244  411620 kubeadm.go:310] [mark-control-plane] Marking the node ha-445282 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:21:38.928584  411620 kubeadm.go:310] [bootstrap-token] Using token: 1d2hng.iymafv4x15o5r3g5
	I0717 18:21:38.930137  411620 out.go:204]   - Configuring RBAC rules ...
	I0717 18:21:38.930242  411620 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:21:38.937081  411620 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:21:38.949768  411620 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:21:38.952623  411620 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:21:38.955474  411620 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:21:38.958885  411620 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:21:39.259682  411620 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:21:39.683847  411620 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:21:40.257777  411620 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:21:40.258689  411620 kubeadm.go:310] 
	I0717 18:21:40.258756  411620 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:21:40.258782  411620 kubeadm.go:310] 
	I0717 18:21:40.258882  411620 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:21:40.258891  411620 kubeadm.go:310] 
	I0717 18:21:40.258925  411620 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:21:40.258999  411620 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:21:40.259073  411620 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:21:40.259088  411620 kubeadm.go:310] 
	I0717 18:21:40.259169  411620 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:21:40.259185  411620 kubeadm.go:310] 
	I0717 18:21:40.259224  411620 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:21:40.259230  411620 kubeadm.go:310] 
	I0717 18:21:40.259295  411620 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:21:40.259403  411620 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:21:40.259506  411620 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:21:40.259519  411620 kubeadm.go:310] 
	I0717 18:21:40.259653  411620 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:21:40.259758  411620 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:21:40.259769  411620 kubeadm.go:310] 
	I0717 18:21:40.259872  411620 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1d2hng.iymafv4x15o5r3g5 \
	I0717 18:21:40.259999  411620 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 18:21:40.260043  411620 kubeadm.go:310] 	--control-plane 
	I0717 18:21:40.260055  411620 kubeadm.go:310] 
	I0717 18:21:40.260172  411620 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:21:40.260180  411620 kubeadm.go:310] 
	I0717 18:21:40.260277  411620 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1d2hng.iymafv4x15o5r3g5 \
	I0717 18:21:40.260425  411620 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 18:21:40.260799  411620 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:21:40.260848  411620 cni.go:84] Creating CNI manager for ""
	I0717 18:21:40.260860  411620 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 18:21:40.263475  411620 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 18:21:40.264768  411620 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 18:21:40.270397  411620 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 18:21:40.270415  411620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 18:21:40.288797  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 18:21:40.636457  411620 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:21:40.636556  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:40.636556  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-445282 minikube.k8s.io/updated_at=2024_07_17T18_21_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=ha-445282 minikube.k8s.io/primary=true
	I0717 18:21:40.831014  411620 ops.go:34] apiserver oom_adj: -16
	I0717 18:21:40.834914  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:41.336024  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:41.835178  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:42.335924  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:42.835263  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:43.335949  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:43.835493  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:44.335826  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:44.836061  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:45.335816  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:45.835226  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:46.335188  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:46.835610  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:47.336038  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:47.835093  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:48.335826  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:48.835672  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:49.335488  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:49.835294  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:50.335063  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:50.835711  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:51.335151  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:51.835852  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:52.335270  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:52.835971  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:52.926916  411620 kubeadm.go:1113] duration metric: took 12.290444855s to wait for elevateKubeSystemPrivileges
	I0717 18:21:52.926959  411620 kubeadm.go:394] duration metric: took 24.401254511s to StartCluster
	I0717 18:21:52.926980  411620 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:52.927068  411620 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:21:52.927944  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:52.928193  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 18:21:52.928210  411620 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:21:52.928275  411620 addons.go:69] Setting storage-provisioner=true in profile "ha-445282"
	I0717 18:21:52.928185  411620 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:21:52.928317  411620 addons.go:69] Setting default-storageclass=true in profile "ha-445282"
	I0717 18:21:52.928331  411620 start.go:241] waiting for startup goroutines ...
	I0717 18:21:52.928345  411620 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-445282"
	I0717 18:21:52.928309  411620 addons.go:234] Setting addon storage-provisioner=true in "ha-445282"
	I0717 18:21:52.928417  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:21:52.928443  411620 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:21:52.928784  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:21:52.928788  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:21:52.928824  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:21:52.928845  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:21:52.944255  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38437
	I0717 18:21:52.944403  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41001
	I0717 18:21:52.944761  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:21:52.944892  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:21:52.945267  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:21:52.945290  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:21:52.945474  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:21:52.945502  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:21:52.945645  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:21:52.945815  411620 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:21:52.945868  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:21:52.946472  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:21:52.946523  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:21:52.948278  411620 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:21:52.948693  411620 kapi.go:59] client config for ha-445282: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.crt", KeyFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key", CAFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 18:21:52.949334  411620 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 18:21:52.949608  411620 addons.go:234] Setting addon default-storageclass=true in "ha-445282"
	I0717 18:21:52.949663  411620 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:21:52.950039  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:21:52.950091  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:21:52.962693  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45731
	I0717 18:21:52.963272  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:21:52.963853  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:21:52.963880  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:21:52.964273  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:21:52.964468  411620 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:21:52.965566  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41917
	I0717 18:21:52.966185  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:21:52.966274  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:52.966730  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:21:52.966760  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:21:52.967216  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:21:52.967820  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:21:52.967872  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:21:52.968008  411620 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:21:52.969183  411620 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:21:52.969202  411620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:21:52.969222  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:52.972583  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:52.973016  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:52.973042  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:52.973211  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:52.973411  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:52.973612  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:52.973747  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:21:52.983571  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I0717 18:21:52.983974  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:21:52.984512  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:21:52.984538  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:21:52.984856  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:21:52.985062  411620 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:21:52.986449  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:52.986759  411620 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:21:52.986788  411620 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:21:52.986809  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:52.989775  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:52.990209  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:52.990229  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:52.990380  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:52.990549  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:52.990718  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:52.990872  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:21:53.047876  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 18:21:53.115126  411620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:21:53.132943  411620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:21:53.303660  411620 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 18:21:53.553631  411620 main.go:141] libmachine: Making call to close driver server
	I0717 18:21:53.553649  411620 main.go:141] libmachine: Making call to close driver server
	I0717 18:21:53.553661  411620 main.go:141] libmachine: (ha-445282) Calling .Close
	I0717 18:21:53.553667  411620 main.go:141] libmachine: (ha-445282) Calling .Close
	I0717 18:21:53.553969  411620 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:21:53.553985  411620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:21:53.553987  411620 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:21:53.553998  411620 main.go:141] libmachine: Making call to close driver server
	I0717 18:21:53.553999  411620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:21:53.554009  411620 main.go:141] libmachine: (ha-445282) Calling .Close
	I0717 18:21:53.554010  411620 main.go:141] libmachine: Making call to close driver server
	I0717 18:21:53.554063  411620 main.go:141] libmachine: (ha-445282) Calling .Close
	I0717 18:21:53.554251  411620 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:21:53.554276  411620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:21:53.555389  411620 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:21:53.555410  411620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:21:53.555563  411620 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0717 18:21:53.555572  411620 round_trippers.go:469] Request Headers:
	I0717 18:21:53.555581  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:21:53.555587  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:21:53.564705  411620 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 18:21:53.565391  411620 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0717 18:21:53.565408  411620 round_trippers.go:469] Request Headers:
	I0717 18:21:53.565419  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:21:53.565427  411620 round_trippers.go:473]     Content-Type: application/json
	I0717 18:21:53.565431  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:21:53.567364  411620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 18:21:53.567502  411620 main.go:141] libmachine: Making call to close driver server
	I0717 18:21:53.567516  411620 main.go:141] libmachine: (ha-445282) Calling .Close
	I0717 18:21:53.567902  411620 main.go:141] libmachine: (ha-445282) DBG | Closing plugin on server side
	I0717 18:21:53.567918  411620 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:21:53.567934  411620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:21:53.569566  411620 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 18:21:53.570700  411620 addons.go:510] duration metric: took 642.489279ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0717 18:21:53.570735  411620 start.go:246] waiting for cluster config update ...
	I0717 18:21:53.570751  411620 start.go:255] writing updated cluster config ...
	I0717 18:21:53.572198  411620 out.go:177] 
	I0717 18:21:53.573378  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:21:53.573467  411620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:21:53.574946  411620 out.go:177] * Starting "ha-445282-m02" control-plane node in "ha-445282" cluster
	I0717 18:21:53.575738  411620 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:21:53.575763  411620 cache.go:56] Caching tarball of preloaded images
	I0717 18:21:53.575881  411620 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:21:53.575897  411620 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:21:53.575986  411620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:21:53.576144  411620 start.go:360] acquireMachinesLock for ha-445282-m02: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:21:53.576186  411620 start.go:364] duration metric: took 23.895µs to acquireMachinesLock for "ha-445282-m02"
	I0717 18:21:53.576202  411620 start.go:93] Provisioning new machine with config: &{Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:21:53.576266  411620 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0717 18:21:53.578158  411620 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 18:21:53.578229  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:21:53.578260  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:21:53.594869  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41589
	I0717 18:21:53.595312  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:21:53.595780  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:21:53.595801  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:21:53.596092  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:21:53.596278  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetMachineName
	I0717 18:21:53.596398  411620 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:21:53.596509  411620 start.go:159] libmachine.API.Create for "ha-445282" (driver="kvm2")
	I0717 18:21:53.596538  411620 client.go:168] LocalClient.Create starting
	I0717 18:21:53.596573  411620 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem
	I0717 18:21:53.596613  411620 main.go:141] libmachine: Decoding PEM data...
	I0717 18:21:53.596636  411620 main.go:141] libmachine: Parsing certificate...
	I0717 18:21:53.596705  411620 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem
	I0717 18:21:53.596731  411620 main.go:141] libmachine: Decoding PEM data...
	I0717 18:21:53.596745  411620 main.go:141] libmachine: Parsing certificate...
	I0717 18:21:53.596770  411620 main.go:141] libmachine: Running pre-create checks...
	I0717 18:21:53.596782  411620 main.go:141] libmachine: (ha-445282-m02) Calling .PreCreateCheck
	I0717 18:21:53.596936  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetConfigRaw
	I0717 18:21:53.597276  411620 main.go:141] libmachine: Creating machine...
	I0717 18:21:53.597290  411620 main.go:141] libmachine: (ha-445282-m02) Calling .Create
	I0717 18:21:53.597387  411620 main.go:141] libmachine: (ha-445282-m02) Creating KVM machine...
	I0717 18:21:53.598461  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found existing default KVM network
	I0717 18:21:53.598584  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found existing private KVM network mk-ha-445282
	I0717 18:21:53.598712  411620 main.go:141] libmachine: (ha-445282-m02) Setting up store path in /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02 ...
	I0717 18:21:53.598744  411620 main.go:141] libmachine: (ha-445282-m02) Building disk image from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 18:21:53.598792  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:53.598687  412011 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:21:53.598902  411620 main.go:141] libmachine: (ha-445282-m02) Downloading /home/jenkins/minikube-integration/19282-392903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 18:21:53.845657  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:53.845504  412011 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa...
	I0717 18:21:53.958434  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:53.958300  412011 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/ha-445282-m02.rawdisk...
	I0717 18:21:53.958468  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Writing magic tar header
	I0717 18:21:53.958479  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Writing SSH key tar header
	I0717 18:21:53.958487  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:53.958411  412011 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02 ...
	I0717 18:21:53.958499  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02
	I0717 18:21:53.958527  411620 main.go:141] libmachine: (ha-445282-m02) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02 (perms=drwx------)
	I0717 18:21:53.958553  411620 main.go:141] libmachine: (ha-445282-m02) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines (perms=drwxr-xr-x)
	I0717 18:21:53.958568  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines
	I0717 18:21:53.958579  411620 main.go:141] libmachine: (ha-445282-m02) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube (perms=drwxr-xr-x)
	I0717 18:21:53.958592  411620 main.go:141] libmachine: (ha-445282-m02) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903 (perms=drwxrwxr-x)
	I0717 18:21:53.958602  411620 main.go:141] libmachine: (ha-445282-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 18:21:53.958631  411620 main.go:141] libmachine: (ha-445282-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 18:21:53.958644  411620 main.go:141] libmachine: (ha-445282-m02) Creating domain...
	I0717 18:21:53.958690  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:21:53.958716  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903
	I0717 18:21:53.958729  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 18:21:53.958741  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Checking permissions on dir: /home/jenkins
	I0717 18:21:53.958753  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Checking permissions on dir: /home
	I0717 18:21:53.958763  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Skipping /home - not owner
	I0717 18:21:53.959673  411620 main.go:141] libmachine: (ha-445282-m02) define libvirt domain using xml: 
	I0717 18:21:53.959704  411620 main.go:141] libmachine: (ha-445282-m02) <domain type='kvm'>
	I0717 18:21:53.959711  411620 main.go:141] libmachine: (ha-445282-m02)   <name>ha-445282-m02</name>
	I0717 18:21:53.959718  411620 main.go:141] libmachine: (ha-445282-m02)   <memory unit='MiB'>2200</memory>
	I0717 18:21:53.959757  411620 main.go:141] libmachine: (ha-445282-m02)   <vcpu>2</vcpu>
	I0717 18:21:53.959781  411620 main.go:141] libmachine: (ha-445282-m02)   <features>
	I0717 18:21:53.959789  411620 main.go:141] libmachine: (ha-445282-m02)     <acpi/>
	I0717 18:21:53.959797  411620 main.go:141] libmachine: (ha-445282-m02)     <apic/>
	I0717 18:21:53.959803  411620 main.go:141] libmachine: (ha-445282-m02)     <pae/>
	I0717 18:21:53.959810  411620 main.go:141] libmachine: (ha-445282-m02)     
	I0717 18:21:53.959818  411620 main.go:141] libmachine: (ha-445282-m02)   </features>
	I0717 18:21:53.959829  411620 main.go:141] libmachine: (ha-445282-m02)   <cpu mode='host-passthrough'>
	I0717 18:21:53.959838  411620 main.go:141] libmachine: (ha-445282-m02)   
	I0717 18:21:53.959850  411620 main.go:141] libmachine: (ha-445282-m02)   </cpu>
	I0717 18:21:53.959877  411620 main.go:141] libmachine: (ha-445282-m02)   <os>
	I0717 18:21:53.959897  411620 main.go:141] libmachine: (ha-445282-m02)     <type>hvm</type>
	I0717 18:21:53.959909  411620 main.go:141] libmachine: (ha-445282-m02)     <boot dev='cdrom'/>
	I0717 18:21:53.959923  411620 main.go:141] libmachine: (ha-445282-m02)     <boot dev='hd'/>
	I0717 18:21:53.959939  411620 main.go:141] libmachine: (ha-445282-m02)     <bootmenu enable='no'/>
	I0717 18:21:53.959954  411620 main.go:141] libmachine: (ha-445282-m02)   </os>
	I0717 18:21:53.959965  411620 main.go:141] libmachine: (ha-445282-m02)   <devices>
	I0717 18:21:53.959976  411620 main.go:141] libmachine: (ha-445282-m02)     <disk type='file' device='cdrom'>
	I0717 18:21:53.959992  411620 main.go:141] libmachine: (ha-445282-m02)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/boot2docker.iso'/>
	I0717 18:21:53.960001  411620 main.go:141] libmachine: (ha-445282-m02)       <target dev='hdc' bus='scsi'/>
	I0717 18:21:53.960007  411620 main.go:141] libmachine: (ha-445282-m02)       <readonly/>
	I0717 18:21:53.960013  411620 main.go:141] libmachine: (ha-445282-m02)     </disk>
	I0717 18:21:53.960019  411620 main.go:141] libmachine: (ha-445282-m02)     <disk type='file' device='disk'>
	I0717 18:21:53.960027  411620 main.go:141] libmachine: (ha-445282-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 18:21:53.960047  411620 main.go:141] libmachine: (ha-445282-m02)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/ha-445282-m02.rawdisk'/>
	I0717 18:21:53.960058  411620 main.go:141] libmachine: (ha-445282-m02)       <target dev='hda' bus='virtio'/>
	I0717 18:21:53.960066  411620 main.go:141] libmachine: (ha-445282-m02)     </disk>
	I0717 18:21:53.960077  411620 main.go:141] libmachine: (ha-445282-m02)     <interface type='network'>
	I0717 18:21:53.960090  411620 main.go:141] libmachine: (ha-445282-m02)       <source network='mk-ha-445282'/>
	I0717 18:21:53.960100  411620 main.go:141] libmachine: (ha-445282-m02)       <model type='virtio'/>
	I0717 18:21:53.960114  411620 main.go:141] libmachine: (ha-445282-m02)     </interface>
	I0717 18:21:53.960131  411620 main.go:141] libmachine: (ha-445282-m02)     <interface type='network'>
	I0717 18:21:53.960144  411620 main.go:141] libmachine: (ha-445282-m02)       <source network='default'/>
	I0717 18:21:53.960155  411620 main.go:141] libmachine: (ha-445282-m02)       <model type='virtio'/>
	I0717 18:21:53.960164  411620 main.go:141] libmachine: (ha-445282-m02)     </interface>
	I0717 18:21:53.960174  411620 main.go:141] libmachine: (ha-445282-m02)     <serial type='pty'>
	I0717 18:21:53.960183  411620 main.go:141] libmachine: (ha-445282-m02)       <target port='0'/>
	I0717 18:21:53.960192  411620 main.go:141] libmachine: (ha-445282-m02)     </serial>
	I0717 18:21:53.960204  411620 main.go:141] libmachine: (ha-445282-m02)     <console type='pty'>
	I0717 18:21:53.960221  411620 main.go:141] libmachine: (ha-445282-m02)       <target type='serial' port='0'/>
	I0717 18:21:53.960233  411620 main.go:141] libmachine: (ha-445282-m02)     </console>
	I0717 18:21:53.960243  411620 main.go:141] libmachine: (ha-445282-m02)     <rng model='virtio'>
	I0717 18:21:53.960253  411620 main.go:141] libmachine: (ha-445282-m02)       <backend model='random'>/dev/random</backend>
	I0717 18:21:53.960261  411620 main.go:141] libmachine: (ha-445282-m02)     </rng>
	I0717 18:21:53.960266  411620 main.go:141] libmachine: (ha-445282-m02)     
	I0717 18:21:53.960270  411620 main.go:141] libmachine: (ha-445282-m02)     
	I0717 18:21:53.960278  411620 main.go:141] libmachine: (ha-445282-m02)   </devices>
	I0717 18:21:53.960282  411620 main.go:141] libmachine: (ha-445282-m02) </domain>
	I0717 18:21:53.960301  411620 main.go:141] libmachine: (ha-445282-m02) 
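The XML block above is the libvirt domain definition the kvm2 driver submits before the "Waiting to get IP" retries that follow; each retry is the driver looking for a DHCP lease for the new MAC in the mk-ha-445282 network. A rough manual equivalent with the stock libvirt CLI, using the domain, network and MAC names from this log (the driver does this through the libvirt API rather than virsh, so this is only an illustrative sketch):

    # define and start the domain from the generated XML
    virsh define ha-445282-m02.xml
    virsh start ha-445282-m02
    # poll the network's DHCP leases until the guest's MAC shows up
    virsh net-dhcp-leases mk-ha-445282 | grep 52:54:00:a6:a9:c1

Once a lease appears, the driver records it and reserves the address as a static host entry, which is the "Reserving static IP address" step further down.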
	I0717 18:21:53.966620  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:c0:49:b8 in network default
	I0717 18:21:53.967184  411620 main.go:141] libmachine: (ha-445282-m02) Ensuring networks are active...
	I0717 18:21:53.967211  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:53.967832  411620 main.go:141] libmachine: (ha-445282-m02) Ensuring network default is active
	I0717 18:21:53.968125  411620 main.go:141] libmachine: (ha-445282-m02) Ensuring network mk-ha-445282 is active
	I0717 18:21:53.968497  411620 main.go:141] libmachine: (ha-445282-m02) Getting domain xml...
	I0717 18:21:53.969087  411620 main.go:141] libmachine: (ha-445282-m02) Creating domain...
	I0717 18:21:55.188085  411620 main.go:141] libmachine: (ha-445282-m02) Waiting to get IP...
	I0717 18:21:55.188904  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:55.189294  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:21:55.189322  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:55.189259  412011 retry.go:31] will retry after 207.621374ms: waiting for machine to come up
	I0717 18:21:55.398896  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:55.399353  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:21:55.399382  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:55.399306  412011 retry.go:31] will retry after 297.6147ms: waiting for machine to come up
	I0717 18:21:55.698557  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:55.699049  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:21:55.699073  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:55.698992  412011 retry.go:31] will retry after 352.642718ms: waiting for machine to come up
	I0717 18:21:56.053750  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:56.054148  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:21:56.054180  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:56.054105  412011 retry.go:31] will retry after 449.896159ms: waiting for machine to come up
	I0717 18:21:56.505896  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:56.506320  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:21:56.506348  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:56.506272  412011 retry.go:31] will retry after 487.736707ms: waiting for machine to come up
	I0717 18:21:56.995968  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:56.996402  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:21:56.996435  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:56.996331  412011 retry.go:31] will retry after 890.067855ms: waiting for machine to come up
	I0717 18:21:57.888589  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:57.889015  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:21:57.889049  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:57.888952  412011 retry.go:31] will retry after 932.094508ms: waiting for machine to come up
	I0717 18:21:58.823844  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:58.824672  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:21:58.824704  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:58.824619  412011 retry.go:31] will retry after 1.360476703s: waiting for machine to come up
	I0717 18:22:00.187007  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:00.187403  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:22:00.187433  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:22:00.187349  412011 retry.go:31] will retry after 1.695987259s: waiting for machine to come up
	I0717 18:22:01.885130  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:01.885528  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:22:01.885557  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:22:01.885486  412011 retry.go:31] will retry after 2.149050919s: waiting for machine to come up
	I0717 18:22:04.035710  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:04.036117  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:22:04.036148  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:22:04.036064  412011 retry.go:31] will retry after 1.757259212s: waiting for machine to come up
	I0717 18:22:05.795253  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:05.795675  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:22:05.795705  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:22:05.795644  412011 retry.go:31] will retry after 2.675849294s: waiting for machine to come up
	I0717 18:22:08.474451  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:08.474792  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:22:08.474828  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:22:08.474736  412011 retry.go:31] will retry after 3.611039345s: waiting for machine to come up
	I0717 18:22:12.086972  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:12.087451  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:22:12.087476  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:22:12.087390  412011 retry.go:31] will retry after 5.26115106s: waiting for machine to come up
	I0717 18:22:17.349693  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.350199  411620 main.go:141] libmachine: (ha-445282-m02) Found IP for machine: 192.168.39.198
	I0717 18:22:17.350228  411620 main.go:141] libmachine: (ha-445282-m02) Reserving static IP address...
	I0717 18:22:17.350237  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has current primary IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.350593  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find host DHCP lease matching {name: "ha-445282-m02", mac: "52:54:00:a6:a9:c1", ip: "192.168.39.198"} in network mk-ha-445282
	I0717 18:22:17.426961  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Getting to WaitForSSH function...
	I0717 18:22:17.426997  411620 main.go:141] libmachine: (ha-445282-m02) Reserved static IP address: 192.168.39.198
	I0717 18:22:17.427009  411620 main.go:141] libmachine: (ha-445282-m02) Waiting for SSH to be available...
	I0717 18:22:17.430298  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.430735  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:17.430764  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.430907  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Using SSH client type: external
	I0717 18:22:17.430935  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa (-rw-------)
	I0717 18:22:17.430970  411620 main.go:141] libmachine: (ha-445282-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:22:17.430985  411620 main.go:141] libmachine: (ha-445282-m02) DBG | About to run SSH command:
	I0717 18:22:17.430998  411620 main.go:141] libmachine: (ha-445282-m02) DBG | exit 0
	I0717 18:22:17.556606  411620 main.go:141] libmachine: (ha-445282-m02) DBG | SSH cmd err, output: <nil>: 
	I0717 18:22:17.556877  411620 main.go:141] libmachine: (ha-445282-m02) KVM machine creation complete!
	I0717 18:22:17.557176  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetConfigRaw
	I0717 18:22:17.557714  411620 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:22:17.557959  411620 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:22:17.558130  411620 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 18:22:17.558166  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetState
	I0717 18:22:17.559776  411620 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 18:22:17.559800  411620 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 18:22:17.559808  411620 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 18:22:17.559814  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:17.562429  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.562845  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:17.562884  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.563047  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:17.563247  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:17.563422  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:17.563546  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:17.563707  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:22:17.563994  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0717 18:22:17.564010  411620 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 18:22:17.667823  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:22:17.667848  411620 main.go:141] libmachine: Detecting the provisioner...
	I0717 18:22:17.667856  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:17.671014  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.671389  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:17.671420  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.671571  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:17.671811  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:17.672000  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:17.672110  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:17.672287  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:22:17.672514  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0717 18:22:17.672530  411620 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 18:22:17.781418  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 18:22:17.781532  411620 main.go:141] libmachine: found compatible host: buildroot
	I0717 18:22:17.781546  411620 main.go:141] libmachine: Provisioning with buildroot...
	I0717 18:22:17.781555  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetMachineName
	I0717 18:22:17.781830  411620 buildroot.go:166] provisioning hostname "ha-445282-m02"
	I0717 18:22:17.781854  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetMachineName
	I0717 18:22:17.782076  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:17.784828  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.785192  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:17.785226  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.785374  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:17.785556  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:17.785732  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:17.785894  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:17.786034  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:22:17.786203  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0717 18:22:17.786215  411620 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-445282-m02 && echo "ha-445282-m02" | sudo tee /etc/hostname
	I0717 18:22:17.902339  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-445282-m02
	
	I0717 18:22:17.902376  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:17.904945  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.905270  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:17.905298  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.905480  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:17.905686  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:17.905843  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:17.905973  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:17.906160  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:22:17.906402  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0717 18:22:17.906425  411620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-445282-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-445282-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-445282-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:22:18.018118  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:22:18.018160  411620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 18:22:18.018182  411620 buildroot.go:174] setting up certificates
	I0717 18:22:18.018194  411620 provision.go:84] configureAuth start
	I0717 18:22:18.018204  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetMachineName
	I0717 18:22:18.018501  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetIP
	I0717 18:22:18.021271  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.021744  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.021782  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.021943  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:18.024598  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.024981  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.025018  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.025148  411620 provision.go:143] copyHostCerts
	I0717 18:22:18.025184  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:22:18.025229  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 18:22:18.025240  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:22:18.025308  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 18:22:18.025397  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:22:18.025414  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 18:22:18.025421  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:22:18.025443  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 18:22:18.025488  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:22:18.025503  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 18:22:18.025509  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:22:18.025528  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 18:22:18.025576  411620 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.ha-445282-m02 san=[127.0.0.1 192.168.39.198 ha-445282-m02 localhost minikube]
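The server cert generated here is the machine-level server.pem for the new node, signed by the shared ca.pem/ca-key.pem and carrying the SAN list shown above (loopback, the node IP, the hostname, localhost and minikube). Purely as an illustration of that SAN layout, a self-signed stand-in could be produced with openssl; minikube actually signs this certificate with its own CA rather than self-signing, and the subject/CN below is a hypothetical choice:

    # hypothetical stand-in for the SAN layout, not the actual minikube flow
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout server-key.pem -out server.pem \
      -subj "/O=jenkins.ha-445282-m02/CN=minikube" \
      -addext "subjectAltName=IP:127.0.0.1,IP:192.168.39.198,DNS:ha-445282-m02,DNS:localhost,DNS:minikube"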
	I0717 18:22:18.116857  411620 provision.go:177] copyRemoteCerts
	I0717 18:22:18.116917  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:22:18.116942  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:18.119855  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.120188  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.120223  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.120428  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:18.120612  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:18.120797  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:18.120917  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	I0717 18:22:18.204519  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 18:22:18.204602  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 18:22:18.228303  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 18:22:18.228386  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 18:22:18.252956  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 18:22:18.253028  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:22:18.277615  411620 provision.go:87] duration metric: took 259.401212ms to configureAuth
	I0717 18:22:18.277650  411620 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:22:18.277828  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:22:18.277902  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:18.280846  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.281294  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.281327  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.281567  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:18.281809  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:18.281997  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:18.282157  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:18.282395  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:22:18.282580  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0717 18:22:18.282595  411620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:22:18.572242  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:22:18.572270  411620 main.go:141] libmachine: Checking connection to Docker...
	I0717 18:22:18.572279  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetURL
	I0717 18:22:18.573653  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Using libvirt version 6000000
	I0717 18:22:18.576062  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.576421  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.576447  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.576648  411620 main.go:141] libmachine: Docker is up and running!
	I0717 18:22:18.576667  411620 main.go:141] libmachine: Reticulating splines...
	I0717 18:22:18.576676  411620 client.go:171] duration metric: took 24.980126441s to LocalClient.Create
	I0717 18:22:18.576706  411620 start.go:167] duration metric: took 24.980198027s to libmachine.API.Create "ha-445282"
	I0717 18:22:18.576718  411620 start.go:293] postStartSetup for "ha-445282-m02" (driver="kvm2")
	I0717 18:22:18.576733  411620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:22:18.576758  411620 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:22:18.577030  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:22:18.577055  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:18.579483  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.579821  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.579852  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.580059  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:18.580319  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:18.580465  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:18.580640  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	I0717 18:22:18.663436  411620 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:22:18.667924  411620 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:22:18.667949  411620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 18:22:18.668021  411620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 18:22:18.668112  411620 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 18:22:18.668125  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /etc/ssl/certs/4001712.pem
	I0717 18:22:18.668231  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:22:18.678158  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:22:18.706673  411620 start.go:296] duration metric: took 129.933856ms for postStartSetup
	I0717 18:22:18.706734  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetConfigRaw
	I0717 18:22:18.707470  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetIP
	I0717 18:22:18.710115  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.710530  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.710555  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.710807  411620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:22:18.711016  411620 start.go:128] duration metric: took 25.13473763s to createHost
	I0717 18:22:18.711040  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:18.713449  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.713793  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.713819  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.714025  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:18.714208  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:18.714357  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:18.714489  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:18.714616  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:22:18.714806  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0717 18:22:18.714819  411620 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:22:18.821413  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721240538.800156557
	
	I0717 18:22:18.821446  411620 fix.go:216] guest clock: 1721240538.800156557
	I0717 18:22:18.821455  411620 fix.go:229] Guest: 2024-07-17 18:22:18.800156557 +0000 UTC Remote: 2024-07-17 18:22:18.711027236 +0000 UTC m=+81.310705212 (delta=89.129321ms)
	I0717 18:22:18.821477  411620 fix.go:200] guest clock delta is within tolerance: 89.129321ms
	I0717 18:22:18.821484  411620 start.go:83] releasing machines lock for "ha-445282-m02", held for 25.245288365s
	I0717 18:22:18.821509  411620 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:22:18.821821  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetIP
	I0717 18:22:18.824555  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.824950  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.824978  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.827412  411620 out.go:177] * Found network options:
	I0717 18:22:18.828814  411620 out.go:177]   - NO_PROXY=192.168.39.147
	W0717 18:22:18.830089  411620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 18:22:18.830115  411620 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:22:18.830694  411620 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:22:18.830893  411620 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:22:18.830978  411620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:22:18.831022  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	W0717 18:22:18.831109  411620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 18:22:18.831206  411620 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:22:18.831230  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:18.833539  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.833781  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.833909  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.833935  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.834027  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:18.834172  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.834182  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:18.834217  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.834325  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:18.834383  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:18.834502  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:18.834508  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	I0717 18:22:18.834656  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:18.834819  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	I0717 18:22:19.066807  411620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:22:19.073339  411620 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:22:19.073398  411620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:22:19.090466  411620 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:22:19.090499  411620 start.go:495] detecting cgroup driver to use...
	I0717 18:22:19.090581  411620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:22:19.106914  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:22:19.121704  411620 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:22:19.121767  411620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:22:19.136199  411620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:22:19.151333  411620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:22:19.278557  411620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:22:19.448040  411620 docker.go:233] disabling docker service ...
	I0717 18:22:19.448138  411620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:22:19.462987  411620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:22:19.475866  411620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:22:19.600005  411620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:22:19.731330  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:22:19.745362  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:22:19.763506  411620 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:22:19.763647  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:22:19.773912  411620 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:22:19.773988  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:22:19.784000  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:22:19.794123  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:22:19.804403  411620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:22:19.814801  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:22:19.824951  411620 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:22:19.844093  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
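The sed/grep sequence above rewrites the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: it pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the "pod" cgroup, and opens unprivileged low ports through default_sysctls. Assuming those edits applied cleanly, the relevant lines of the drop-in should now read roughly as follows (reconstructed from the commands above, not a dump of the real file):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]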
	I0717 18:22:19.856601  411620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:22:19.867850  411620 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:22:19.867922  411620 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:22:19.884094  411620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:22:19.895690  411620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:22:20.020399  411620 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:22:20.158643  411620 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:22:20.158733  411620 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:22:20.163293  411620 start.go:563] Will wait 60s for crictl version
	I0717 18:22:20.163344  411620 ssh_runner.go:195] Run: which crictl
	I0717 18:22:20.166947  411620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:22:20.203400  411620 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
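After restarting CRI-O the tooling waits for /var/run/crio/crio.sock and queries it with crictl; the output above reports cri-o 1.29.1 speaking CRI API v1. The same check can be repeated by hand on the guest, using the endpoint written to /etc/crictl.yaml earlier in this log:

    # query the CRI runtime over the crio socket
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    # broader runtime status/config dump
    sudo crictl info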
	I0717 18:22:20.203494  411620 ssh_runner.go:195] Run: crio --version
	I0717 18:22:20.234832  411620 ssh_runner.go:195] Run: crio --version
	I0717 18:22:20.264801  411620 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:22:20.266229  411620 out.go:177]   - env NO_PROXY=192.168.39.147
	I0717 18:22:20.267600  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetIP
	I0717 18:22:20.270264  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:20.270624  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:20.270655  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:20.270878  411620 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:22:20.275383  411620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:22:20.289213  411620 mustload.go:65] Loading cluster: ha-445282
	I0717 18:22:20.289486  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:22:20.289815  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:22:20.289854  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:22:20.305066  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44657
	I0717 18:22:20.305592  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:22:20.306084  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:22:20.306107  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:22:20.306476  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:22:20.306661  411620 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:22:20.308332  411620 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:22:20.308720  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:22:20.308757  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:22:20.323693  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
	I0717 18:22:20.324190  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:22:20.324723  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:22:20.324751  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:22:20.325057  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:22:20.325274  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:22:20.325471  411620 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282 for IP: 192.168.39.198
	I0717 18:22:20.325486  411620 certs.go:194] generating shared ca certs ...
	I0717 18:22:20.325505  411620 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:22:20.325667  411620 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 18:22:20.325708  411620 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 18:22:20.325718  411620 certs.go:256] generating profile certs ...
	I0717 18:22:20.325788  411620 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key
	I0717 18:22:20.325812  411620 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.80739ac4
	I0717 18:22:20.325827  411620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.80739ac4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.147 192.168.39.198 192.168.39.254]
	I0717 18:22:20.482321  411620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.80739ac4 ...
	I0717 18:22:20.482352  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.80739ac4: {Name:mk99f343f9591038fc52d5d3eb699d6c2e430eee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:22:20.482519  411620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.80739ac4 ...
	I0717 18:22:20.482533  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.80739ac4: {Name:mkcee6298db383444a1d2160d83549ebfb92dfa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:22:20.482600  411620 certs.go:381] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.80739ac4 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt
	I0717 18:22:20.482729  411620 certs.go:385] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.80739ac4 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key
	I0717 18:22:20.482856  411620 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key
	I0717 18:22:20.482873  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 18:22:20.482885  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 18:22:20.482898  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 18:22:20.482910  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 18:22:20.482921  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 18:22:20.482931  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 18:22:20.482940  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 18:22:20.482949  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 18:22:20.482997  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 18:22:20.483024  411620 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 18:22:20.483034  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 18:22:20.483054  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 18:22:20.483073  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:22:20.483096  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 18:22:20.483130  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:22:20.483154  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem -> /usr/share/ca-certificates/400171.pem
	I0717 18:22:20.483167  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /usr/share/ca-certificates/4001712.pem
	I0717 18:22:20.483178  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:22:20.483212  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:22:20.485999  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:22:20.486374  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:22:20.486404  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:22:20.486603  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:22:20.486820  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:22:20.487011  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:22:20.487139  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:22:20.560956  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 18:22:20.566416  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 18:22:20.581062  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 18:22:20.585730  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0717 18:22:20.597251  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 18:22:20.601984  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 18:22:20.613162  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 18:22:20.617881  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0717 18:22:20.628214  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 18:22:20.632470  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 18:22:20.642075  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 18:22:20.646356  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 18:22:20.657213  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:22:20.681109  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:22:20.703426  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:22:20.726808  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 18:22:20.750263  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0717 18:22:20.775428  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 18:22:20.799369  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:22:20.823943  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:22:20.847480  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 18:22:20.870382  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 18:22:20.894238  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:22:20.916414  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 18:22:20.941744  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0717 18:22:20.958242  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 18:22:20.975232  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0717 18:22:20.991557  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 18:22:21.008259  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 18:22:21.026700  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 18:22:21.045767  411620 ssh_runner.go:195] Run: openssl version
	I0717 18:22:21.051756  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:22:21.063164  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:22:21.067910  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:22:21.067974  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:22:21.073670  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:22:21.084567  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 18:22:21.095266  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 18:22:21.099539  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 18:22:21.099593  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 18:22:21.105114  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 18:22:21.115993  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 18:22:21.127214  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 18:22:21.132019  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 18:22:21.132078  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 18:22:21.137910  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:22:21.148844  411620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:22:21.153003  411620 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 18:22:21.153057  411620 kubeadm.go:934] updating node {m02 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0717 18:22:21.153144  411620 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-445282-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:22:21.153168  411620 kube-vip.go:115] generating kube-vip config ...
	I0717 18:22:21.153241  411620 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 18:22:21.171613  411620 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 18:22:21.171707  411620 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0717 18:22:21.171771  411620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:22:21.182446  411620 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0717 18:22:21.182519  411620 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0717 18:22:21.193563  411620 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0717 18:22:21.193574  411620 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0717 18:22:21.193561  411620 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0717 18:22:21.193633  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 18:22:21.193707  411620 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 18:22:21.198328  411620 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0717 18:22:21.198359  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0717 18:22:22.297113  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 18:22:22.297206  411620 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 18:22:22.302199  411620 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0717 18:22:22.302234  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0717 18:22:22.512666  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:22:22.538115  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 18:22:22.538234  411620 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 18:22:22.545442  411620 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0717 18:22:22.545486  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
	I0717 18:22:22.958936  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 18:22:22.968857  411620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 18:22:22.986120  411620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:22:23.003113  411620 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 18:22:23.020072  411620 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 18:22:23.023996  411620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:22:23.036140  411620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:22:23.155530  411620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:22:23.173036  411620 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:22:23.173409  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:22:23.173475  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:22:23.188641  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0717 18:22:23.189112  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:22:23.189587  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:22:23.189611  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:22:23.190011  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:22:23.190247  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:22:23.190450  411620 start.go:317] joinCluster: &{Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:22:23.190573  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 18:22:23.190598  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:22:23.193903  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:22:23.194385  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:22:23.194416  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:22:23.194589  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:22:23.194769  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:22:23.194939  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:22:23.195081  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:22:23.356747  411620 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:22:23.356804  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token usqba2.lrked5kopejozm88 --discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-445282-m02 --control-plane --apiserver-advertise-address=192.168.39.198 --apiserver-bind-port=8443"
	I0717 18:22:45.630321  411620 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token usqba2.lrked5kopejozm88 --discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-445282-m02 --control-plane --apiserver-advertise-address=192.168.39.198 --apiserver-bind-port=8443": (22.273491175s)
	I0717 18:22:45.630364  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 18:22:46.192092  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-445282-m02 minikube.k8s.io/updated_at=2024_07_17T18_22_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=ha-445282 minikube.k8s.io/primary=false
	I0717 18:22:46.313299  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-445282-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0717 18:22:46.446377  411620 start.go:319] duration metric: took 23.255923711s to joinCluster
	I0717 18:22:46.446481  411620 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:22:46.446836  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:22:46.448057  411620 out.go:177] * Verifying Kubernetes components...
	I0717 18:22:46.449426  411620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:22:46.675775  411620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:22:46.731102  411620 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:22:46.731356  411620 kapi.go:59] client config for ha-445282: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.crt", KeyFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key", CAFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 18:22:46.731435  411620 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.147:8443
	I0717 18:22:46.731667  411620 node_ready.go:35] waiting up to 6m0s for node "ha-445282-m02" to be "Ready" ...
	I0717 18:22:46.731771  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:46.731782  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:46.731793  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:46.731805  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:46.740915  411620 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 18:22:47.232891  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:47.232914  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:47.232922  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:47.232927  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:47.236320  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:47.732030  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:47.732052  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:47.732060  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:47.732065  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:47.736981  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:22:48.232321  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:48.232341  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:48.232349  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:48.232354  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:48.235414  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:48.732177  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:48.732201  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:48.732209  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:48.732217  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:48.735714  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:48.736615  411620 node_ready.go:53] node "ha-445282-m02" has status "Ready":"False"
	I0717 18:22:49.232003  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:49.232027  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:49.232035  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:49.232039  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:49.235207  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:49.732137  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:49.732161  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:49.732172  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:49.732178  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:49.735720  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:50.232701  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:50.232732  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:50.232745  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:50.232752  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:50.236791  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:22:50.732729  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:50.732753  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:50.732762  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:50.732766  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:50.736335  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:50.736999  411620 node_ready.go:53] node "ha-445282-m02" has status "Ready":"False"
	I0717 18:22:51.232430  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:51.232455  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:51.232467  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:51.232473  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:51.235591  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:51.732508  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:51.732528  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:51.732540  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:51.732544  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:51.735795  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:52.232856  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:52.232885  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:52.232893  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:52.232898  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:52.236521  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:52.731955  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:52.731977  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:52.731986  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:52.731990  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:52.735025  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:53.232765  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:53.232786  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:53.232795  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:53.232799  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:53.235400  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:22:53.236117  411620 node_ready.go:53] node "ha-445282-m02" has status "Ready":"False"
	I0717 18:22:53.732470  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:53.732499  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:53.732507  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:53.732513  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:53.735581  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:54.232362  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:54.232386  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:54.232397  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:54.232404  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:54.236199  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:54.732685  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:54.732710  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:54.732718  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:54.732721  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:54.735994  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:55.232669  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:55.232693  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:55.232704  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:55.232710  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:55.235921  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:55.236460  411620 node_ready.go:53] node "ha-445282-m02" has status "Ready":"False"
	I0717 18:22:55.732753  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:55.732778  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:55.732789  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:55.732795  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:55.735781  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:22:56.232861  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:56.232883  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:56.232892  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:56.232900  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:56.236875  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:56.732241  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:56.732264  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:56.732271  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:56.732276  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:56.735270  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:22:57.232261  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:57.232283  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:57.232291  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:57.232295  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:57.235265  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:22:57.732172  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:57.732193  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:57.732201  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:57.732208  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:57.735522  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:57.736392  411620 node_ready.go:53] node "ha-445282-m02" has status "Ready":"False"
	I0717 18:22:58.232336  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:58.232362  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:58.232373  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:58.232380  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:58.235729  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:58.731973  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:58.731996  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:58.732007  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:58.732013  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:58.734822  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:22:59.231899  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:59.231923  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:59.231934  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:59.231941  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:59.235367  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:59.732702  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:59.732725  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:59.732736  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:59.732741  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:59.735902  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:59.736479  411620 node_ready.go:53] node "ha-445282-m02" has status "Ready":"False"
	I0717 18:23:00.232808  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:00.232834  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:00.232844  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:00.232850  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:00.236175  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:00.732185  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:00.732211  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:00.732222  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:00.732227  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:00.735924  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:01.232837  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:01.232866  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:01.232876  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:01.232881  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:01.236027  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:01.731946  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:01.731970  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:01.731978  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:01.731985  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:01.735059  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:02.232608  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:02.232636  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:02.232648  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:02.232656  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:02.237624  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:23:02.238784  411620 node_ready.go:53] node "ha-445282-m02" has status "Ready":"False"
	I0717 18:23:02.732902  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:02.732932  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:02.732943  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:02.732949  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:02.736916  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:03.231916  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:03.231944  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.231955  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.231960  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.238957  411620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 18:23:03.239566  411620 node_ready.go:49] node "ha-445282-m02" has status "Ready":"True"
	I0717 18:23:03.239590  411620 node_ready.go:38] duration metric: took 16.507907061s for node "ha-445282-m02" to be "Ready" ...
	I0717 18:23:03.239602  411620 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:23:03.239713  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:23:03.239726  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.239737  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.239742  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.245311  411620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 18:23:03.252420  411620 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-28njs" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.252519  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-28njs
	I0717 18:23:03.252531  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.252540  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.252547  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.255347  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:23:03.255945  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:03.255961  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.255968  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.255973  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.259166  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:03.259664  411620 pod_ready.go:92] pod "coredns-7db6d8ff4d-28njs" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:03.259685  411620 pod_ready.go:81] duration metric: took 7.241162ms for pod "coredns-7db6d8ff4d-28njs" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.259700  411620 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rzxbr" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.259777  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rzxbr
	I0717 18:23:03.259787  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.259798  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.259807  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.264083  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:23:03.264753  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:03.264774  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.264783  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.264790  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.267576  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:23:03.268627  411620 pod_ready.go:92] pod "coredns-7db6d8ff4d-rzxbr" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:03.268646  411620 pod_ready.go:81] duration metric: took 8.935277ms for pod "coredns-7db6d8ff4d-rzxbr" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.268655  411620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.268716  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/etcd-ha-445282
	I0717 18:23:03.268725  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.268732  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.268736  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.272514  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:03.273732  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:03.273748  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.273755  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.273758  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.276933  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:03.277751  411620 pod_ready.go:92] pod "etcd-ha-445282" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:03.277772  411620 pod_ready.go:81] duration metric: took 9.109829ms for pod "etcd-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.277783  411620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.277844  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/etcd-ha-445282-m02
	I0717 18:23:03.277854  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.277871  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.277882  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.281985  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:23:03.282692  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:03.282707  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.282713  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.282717  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.286570  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:03.287114  411620 pod_ready.go:92] pod "etcd-ha-445282-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:03.287140  411620 pod_ready.go:81] duration metric: took 9.34744ms for pod "etcd-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.287158  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.432569  411620 request.go:629] Waited for 145.334031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282
	I0717 18:23:03.432644  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282
	I0717 18:23:03.432649  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.432658  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.432666  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.436375  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:03.632592  411620 request.go:629] Waited for 195.443141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:03.632661  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:03.632666  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.632674  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.632679  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.636332  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:03.636982  411620 pod_ready.go:92] pod "kube-apiserver-ha-445282" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:03.637005  411620 pod_ready.go:81] duration metric: took 349.832596ms for pod "kube-apiserver-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.637016  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.832089  411620 request.go:629] Waited for 194.99822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282-m02
	I0717 18:23:03.832155  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282-m02
	I0717 18:23:03.832161  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.832172  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.832178  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.835467  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:04.032472  411620 request.go:629] Waited for 196.385406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:04.032576  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:04.032582  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:04.032590  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:04.032598  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:04.036568  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:04.037085  411620 pod_ready.go:92] pod "kube-apiserver-ha-445282-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:04.037105  411620 pod_ready.go:81] duration metric: took 400.081261ms for pod "kube-apiserver-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:04.037119  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:04.232297  411620 request.go:629] Waited for 195.094299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282
	I0717 18:23:04.232379  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282
	I0717 18:23:04.232384  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:04.232392  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:04.232397  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:04.235597  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:04.432692  411620 request.go:629] Waited for 196.36902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:04.432749  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:04.432754  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:04.432761  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:04.432766  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:04.436136  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:04.436922  411620 pod_ready.go:92] pod "kube-controller-manager-ha-445282" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:04.436948  411620 pod_ready.go:81] duration metric: took 399.821785ms for pod "kube-controller-manager-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:04.436958  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:04.631980  411620 request.go:629] Waited for 194.915166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282-m02
	I0717 18:23:04.632054  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282-m02
	I0717 18:23:04.632062  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:04.632073  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:04.632085  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:04.635130  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:04.832306  411620 request.go:629] Waited for 196.372293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:04.832379  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:04.832386  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:04.832398  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:04.832406  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:04.835884  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:04.836310  411620 pod_ready.go:92] pod "kube-controller-manager-ha-445282-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:04.836326  411620 pod_ready.go:81] duration metric: took 399.360617ms for pod "kube-controller-manager-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:04.836337  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vxmp8" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:05.032499  411620 request.go:629] Waited for 196.065865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vxmp8
	I0717 18:23:05.032575  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vxmp8
	I0717 18:23:05.032580  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:05.032588  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:05.032597  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:05.037228  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:23:05.232494  411620 request.go:629] Waited for 194.354027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:05.232574  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:05.232593  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:05.232607  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:05.232614  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:05.235981  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:05.236788  411620 pod_ready.go:92] pod "kube-proxy-vxmp8" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:05.236810  411620 pod_ready.go:81] duration metric: took 400.467642ms for pod "kube-proxy-vxmp8" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:05.236821  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xs65r" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:05.432728  411620 request.go:629] Waited for 195.789224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xs65r
	I0717 18:23:05.432813  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xs65r
	I0717 18:23:05.432825  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:05.432835  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:05.432845  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:05.436657  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:05.632936  411620 request.go:629] Waited for 195.401534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:05.633003  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:05.633009  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:05.633016  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:05.633021  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:05.636228  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:05.636856  411620 pod_ready.go:92] pod "kube-proxy-xs65r" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:05.636885  411620 pod_ready.go:81] duration metric: took 400.05579ms for pod "kube-proxy-xs65r" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:05.636898  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:05.832889  411620 request.go:629] Waited for 195.892653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282
	I0717 18:23:05.832952  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282
	I0717 18:23:05.832956  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:05.832964  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:05.832970  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:05.836805  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:06.032833  411620 request.go:629] Waited for 195.335122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:06.032896  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:06.032903  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:06.032914  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:06.032921  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:06.036958  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:23:06.037467  411620 pod_ready.go:92] pod "kube-scheduler-ha-445282" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:06.037485  411620 pod_ready.go:81] duration metric: took 400.575993ms for pod "kube-scheduler-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:06.037496  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:06.232622  411620 request.go:629] Waited for 195.022731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282-m02
	I0717 18:23:06.232688  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282-m02
	I0717 18:23:06.232693  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:06.232701  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:06.232706  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:06.236081  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:06.432069  411620 request.go:629] Waited for 195.338129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:06.432137  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:06.432144  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:06.432151  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:06.432155  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:06.435442  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:06.436165  411620 pod_ready.go:92] pod "kube-scheduler-ha-445282-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:06.436195  411620 pod_ready.go:81] duration metric: took 398.690878ms for pod "kube-scheduler-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:06.436210  411620 pod_ready.go:38] duration metric: took 3.196568559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:23:06.436232  411620 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:23:06.436297  411620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:23:06.453751  411620 api_server.go:72] duration metric: took 20.007218471s to wait for apiserver process to appear ...
	I0717 18:23:06.453783  411620 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:23:06.453815  411620 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0717 18:23:06.458696  411620 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0717 18:23:06.458776  411620 round_trippers.go:463] GET https://192.168.39.147:8443/version
	I0717 18:23:06.458783  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:06.458791  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:06.458797  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:06.459817  411620 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 18:23:06.459983  411620 api_server.go:141] control plane version: v1.30.2
	I0717 18:23:06.460003  411620 api_server.go:131] duration metric: took 6.212787ms to wait for apiserver health ...
	I0717 18:23:06.460013  411620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:23:06.632533  411620 request.go:629] Waited for 172.391381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:23:06.632629  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:23:06.632639  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:06.632656  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:06.632666  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:06.638297  411620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 18:23:06.642503  411620 system_pods.go:59] 17 kube-system pods found
	I0717 18:23:06.642532  411620 system_pods.go:61] "coredns-7db6d8ff4d-28njs" [1e8f2f11-c89c-42ae-829a-e2cf1dea11b6] Running
	I0717 18:23:06.642552  411620 system_pods.go:61] "coredns-7db6d8ff4d-rzxbr" [9630d87d-3470-4675-9b3c-a10ff614f5e1] Running
	I0717 18:23:06.642558  411620 system_pods.go:61] "etcd-ha-445282" [0575d3f5-82a8-4bfd-9386-00d014e19119] Running
	I0717 18:23:06.642563  411620 system_pods.go:61] "etcd-ha-445282-m02" [eb066c71-5455-4bd5-b5c0-f7858661506b] Running
	I0717 18:23:06.642567  411620 system_pods.go:61] "kindnet-75gcw" [872c1132-e584-47c1-a873-74615d52511b] Running
	I0717 18:23:06.642574  411620 system_pods.go:61] "kindnet-mdqdz" [fdb368a3-7d1c-4073-a351-85d6c92a27af] Running
	I0717 18:23:06.642579  411620 system_pods.go:61] "kube-apiserver-ha-445282" [d7814ca7-0944-4cac-8438-53640be6f85c] Running
	I0717 18:23:06.642587  411620 system_pods.go:61] "kube-apiserver-ha-445282-m02" [1014746f-377d-455f-b86b-66e4ee3aaddf] Running
	I0717 18:23:06.642593  411620 system_pods.go:61] "kube-controller-manager-ha-445282" [4b62f365-b4c2-46fd-9ca6-6c18f0205159] Running
	I0717 18:23:06.642597  411620 system_pods.go:61] "kube-controller-manager-ha-445282-m02" [f7ef8ac1-6f28-49f2-95a3-9224907eaf2b] Running
	I0717 18:23:06.642603  411620 system_pods.go:61] "kube-proxy-vxmp8" [cca555da-b93a-430c-8fbe-7e732af65a3a] Running
	I0717 18:23:06.642606  411620 system_pods.go:61] "kube-proxy-xs65r" [f0a65765-1826-47e6-ab8d-78ae6bb3abca] Running
	I0717 18:23:06.642611  411620 system_pods.go:61] "kube-scheduler-ha-445282" [ec2ecb84-3559-430f-815c-a2d2ccbb197b] Running
	I0717 18:23:06.642614  411620 system_pods.go:61] "kube-scheduler-ha-445282-m02" [71380e3c-2e00-4bd3-adf8-06af51f3bb49] Running
	I0717 18:23:06.642620  411620 system_pods.go:61] "kube-vip-ha-445282" [ca5bcedd-e43a-4711-bdfc-dc1c2c524d86] Running
	I0717 18:23:06.642623  411620 system_pods.go:61] "kube-vip-ha-445282-m02" [53798037-a734-43b8-be52-834446680e9a] Running
	I0717 18:23:06.642628  411620 system_pods.go:61] "storage-provisioner" [ae931c3b-8935-481d-bef4-0b05dad8c915] Running
	I0717 18:23:06.642639  411620 system_pods.go:74] duration metric: took 182.619199ms to wait for pod list to return data ...
	I0717 18:23:06.642649  411620 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:23:06.832036  411620 request.go:629] Waited for 189.29106ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/default/serviceaccounts
	I0717 18:23:06.832148  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/default/serviceaccounts
	I0717 18:23:06.832162  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:06.832172  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:06.832178  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:06.835330  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:06.835603  411620 default_sa.go:45] found service account: "default"
	I0717 18:23:06.835627  411620 default_sa.go:55] duration metric: took 192.966758ms for default service account to be created ...
	I0717 18:23:06.835635  411620 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:23:07.032871  411620 request.go:629] Waited for 197.140021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:23:07.032955  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:23:07.032967  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:07.032976  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:07.032983  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:07.038873  411620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 18:23:07.043367  411620 system_pods.go:86] 17 kube-system pods found
	I0717 18:23:07.043395  411620 system_pods.go:89] "coredns-7db6d8ff4d-28njs" [1e8f2f11-c89c-42ae-829a-e2cf1dea11b6] Running
	I0717 18:23:07.043400  411620 system_pods.go:89] "coredns-7db6d8ff4d-rzxbr" [9630d87d-3470-4675-9b3c-a10ff614f5e1] Running
	I0717 18:23:07.043405  411620 system_pods.go:89] "etcd-ha-445282" [0575d3f5-82a8-4bfd-9386-00d014e19119] Running
	I0717 18:23:07.043409  411620 system_pods.go:89] "etcd-ha-445282-m02" [eb066c71-5455-4bd5-b5c0-f7858661506b] Running
	I0717 18:23:07.043413  411620 system_pods.go:89] "kindnet-75gcw" [872c1132-e584-47c1-a873-74615d52511b] Running
	I0717 18:23:07.043418  411620 system_pods.go:89] "kindnet-mdqdz" [fdb368a3-7d1c-4073-a351-85d6c92a27af] Running
	I0717 18:23:07.043423  411620 system_pods.go:89] "kube-apiserver-ha-445282" [d7814ca7-0944-4cac-8438-53640be6f85c] Running
	I0717 18:23:07.043430  411620 system_pods.go:89] "kube-apiserver-ha-445282-m02" [1014746f-377d-455f-b86b-66e4ee3aaddf] Running
	I0717 18:23:07.043441  411620 system_pods.go:89] "kube-controller-manager-ha-445282" [4b62f365-b4c2-46fd-9ca6-6c18f0205159] Running
	I0717 18:23:07.043448  411620 system_pods.go:89] "kube-controller-manager-ha-445282-m02" [f7ef8ac1-6f28-49f2-95a3-9224907eaf2b] Running
	I0717 18:23:07.043457  411620 system_pods.go:89] "kube-proxy-vxmp8" [cca555da-b93a-430c-8fbe-7e732af65a3a] Running
	I0717 18:23:07.043463  411620 system_pods.go:89] "kube-proxy-xs65r" [f0a65765-1826-47e6-ab8d-78ae6bb3abca] Running
	I0717 18:23:07.043468  411620 system_pods.go:89] "kube-scheduler-ha-445282" [ec2ecb84-3559-430f-815c-a2d2ccbb197b] Running
	I0717 18:23:07.043473  411620 system_pods.go:89] "kube-scheduler-ha-445282-m02" [71380e3c-2e00-4bd3-adf8-06af51f3bb49] Running
	I0717 18:23:07.043478  411620 system_pods.go:89] "kube-vip-ha-445282" [ca5bcedd-e43a-4711-bdfc-dc1c2c524d86] Running
	I0717 18:23:07.043481  411620 system_pods.go:89] "kube-vip-ha-445282-m02" [53798037-a734-43b8-be52-834446680e9a] Running
	I0717 18:23:07.043485  411620 system_pods.go:89] "storage-provisioner" [ae931c3b-8935-481d-bef4-0b05dad8c915] Running
	I0717 18:23:07.043492  411620 system_pods.go:126] duration metric: took 207.85115ms to wait for k8s-apps to be running ...
	I0717 18:23:07.043502  411620 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:23:07.043559  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:23:07.064349  411620 system_svc.go:56] duration metric: took 20.831074ms WaitForService to wait for kubelet
	I0717 18:23:07.064384  411620 kubeadm.go:582] duration metric: took 20.617857546s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:23:07.064407  411620 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:23:07.232855  411620 request.go:629] Waited for 168.360051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes
	I0717 18:23:07.232915  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes
	I0717 18:23:07.232920  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:07.232927  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:07.232932  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:07.236514  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:07.237354  411620 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:23:07.237374  411620 node_conditions.go:123] node cpu capacity is 2
	I0717 18:23:07.237385  411620 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:23:07.237389  411620 node_conditions.go:123] node cpu capacity is 2
	I0717 18:23:07.237393  411620 node_conditions.go:105] duration metric: took 172.980945ms to run NodePressure ...
	I0717 18:23:07.237405  411620 start.go:241] waiting for startup goroutines ...
	I0717 18:23:07.237432  411620 start.go:255] writing updated cluster config ...
	I0717 18:23:07.239845  411620 out.go:177] 
	I0717 18:23:07.242288  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:23:07.242385  411620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:23:07.244139  411620 out.go:177] * Starting "ha-445282-m03" control-plane node in "ha-445282" cluster
	I0717 18:23:07.245356  411620 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:23:07.245382  411620 cache.go:56] Caching tarball of preloaded images
	I0717 18:23:07.245493  411620 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:23:07.245504  411620 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:23:07.245593  411620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:23:07.245756  411620 start.go:360] acquireMachinesLock for ha-445282-m03: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:23:07.245799  411620 start.go:364] duration metric: took 22.216µs to acquireMachinesLock for "ha-445282-m03"
	I0717 18:23:07.245813  411620 start.go:93] Provisioning new machine with config: &{Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:23:07.245958  411620 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0717 18:23:07.247628  411620 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 18:23:07.247726  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:23:07.247765  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:23:07.263749  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0717 18:23:07.264308  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:23:07.264900  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:23:07.264928  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:23:07.265246  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:23:07.265467  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetMachineName
	I0717 18:23:07.265622  411620 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:23:07.265806  411620 start.go:159] libmachine.API.Create for "ha-445282" (driver="kvm2")
	I0717 18:23:07.265840  411620 client.go:168] LocalClient.Create starting
	I0717 18:23:07.265882  411620 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem
	I0717 18:23:07.265925  411620 main.go:141] libmachine: Decoding PEM data...
	I0717 18:23:07.265950  411620 main.go:141] libmachine: Parsing certificate...
	I0717 18:23:07.266017  411620 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem
	I0717 18:23:07.266044  411620 main.go:141] libmachine: Decoding PEM data...
	I0717 18:23:07.266065  411620 main.go:141] libmachine: Parsing certificate...
	I0717 18:23:07.266093  411620 main.go:141] libmachine: Running pre-create checks...
	I0717 18:23:07.266105  411620 main.go:141] libmachine: (ha-445282-m03) Calling .PreCreateCheck
	I0717 18:23:07.266260  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetConfigRaw
	I0717 18:23:07.266679  411620 main.go:141] libmachine: Creating machine...
	I0717 18:23:07.266698  411620 main.go:141] libmachine: (ha-445282-m03) Calling .Create
	I0717 18:23:07.266819  411620 main.go:141] libmachine: (ha-445282-m03) Creating KVM machine...
	I0717 18:23:07.268181  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found existing default KVM network
	I0717 18:23:07.268340  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found existing private KVM network mk-ha-445282
	I0717 18:23:07.268466  411620 main.go:141] libmachine: (ha-445282-m03) Setting up store path in /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03 ...
	I0717 18:23:07.268521  411620 main.go:141] libmachine: (ha-445282-m03) Building disk image from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 18:23:07.268567  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:07.268445  412407 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:23:07.268680  411620 main.go:141] libmachine: (ha-445282-m03) Downloading /home/jenkins/minikube-integration/19282-392903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 18:23:07.532529  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:07.532372  412407 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa...
	I0717 18:23:07.686598  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:07.686461  412407 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/ha-445282-m03.rawdisk...
	I0717 18:23:07.686654  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Writing magic tar header
	I0717 18:23:07.686670  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Writing SSH key tar header
	I0717 18:23:07.687972  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:07.687403  412407 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03 ...
	I0717 18:23:07.688022  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03
	I0717 18:23:07.688046  411620 main.go:141] libmachine: (ha-445282-m03) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03 (perms=drwx------)
	I0717 18:23:07.688069  411620 main.go:141] libmachine: (ha-445282-m03) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines (perms=drwxr-xr-x)
	I0717 18:23:07.688077  411620 main.go:141] libmachine: (ha-445282-m03) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube (perms=drwxr-xr-x)
	I0717 18:23:07.688111  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines
	I0717 18:23:07.688142  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:23:07.688152  411620 main.go:141] libmachine: (ha-445282-m03) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903 (perms=drwxrwxr-x)
	I0717 18:23:07.688162  411620 main.go:141] libmachine: (ha-445282-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 18:23:07.688173  411620 main.go:141] libmachine: (ha-445282-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 18:23:07.688189  411620 main.go:141] libmachine: (ha-445282-m03) Creating domain...
	I0717 18:23:07.688204  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903
	I0717 18:23:07.688216  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 18:23:07.688227  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Checking permissions on dir: /home/jenkins
	I0717 18:23:07.688239  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Checking permissions on dir: /home
	I0717 18:23:07.688250  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Skipping /home - not owner
	I0717 18:23:07.689253  411620 main.go:141] libmachine: (ha-445282-m03) define libvirt domain using xml: 
	I0717 18:23:07.689275  411620 main.go:141] libmachine: (ha-445282-m03) <domain type='kvm'>
	I0717 18:23:07.689283  411620 main.go:141] libmachine: (ha-445282-m03)   <name>ha-445282-m03</name>
	I0717 18:23:07.689287  411620 main.go:141] libmachine: (ha-445282-m03)   <memory unit='MiB'>2200</memory>
	I0717 18:23:07.689293  411620 main.go:141] libmachine: (ha-445282-m03)   <vcpu>2</vcpu>
	I0717 18:23:07.689298  411620 main.go:141] libmachine: (ha-445282-m03)   <features>
	I0717 18:23:07.689304  411620 main.go:141] libmachine: (ha-445282-m03)     <acpi/>
	I0717 18:23:07.689311  411620 main.go:141] libmachine: (ha-445282-m03)     <apic/>
	I0717 18:23:07.689316  411620 main.go:141] libmachine: (ha-445282-m03)     <pae/>
	I0717 18:23:07.689320  411620 main.go:141] libmachine: (ha-445282-m03)     
	I0717 18:23:07.689326  411620 main.go:141] libmachine: (ha-445282-m03)   </features>
	I0717 18:23:07.689337  411620 main.go:141] libmachine: (ha-445282-m03)   <cpu mode='host-passthrough'>
	I0717 18:23:07.689344  411620 main.go:141] libmachine: (ha-445282-m03)   
	I0717 18:23:07.689349  411620 main.go:141] libmachine: (ha-445282-m03)   </cpu>
	I0717 18:23:07.689377  411620 main.go:141] libmachine: (ha-445282-m03)   <os>
	I0717 18:23:07.689412  411620 main.go:141] libmachine: (ha-445282-m03)     <type>hvm</type>
	I0717 18:23:07.689423  411620 main.go:141] libmachine: (ha-445282-m03)     <boot dev='cdrom'/>
	I0717 18:23:07.689430  411620 main.go:141] libmachine: (ha-445282-m03)     <boot dev='hd'/>
	I0717 18:23:07.689438  411620 main.go:141] libmachine: (ha-445282-m03)     <bootmenu enable='no'/>
	I0717 18:23:07.689445  411620 main.go:141] libmachine: (ha-445282-m03)   </os>
	I0717 18:23:07.689456  411620 main.go:141] libmachine: (ha-445282-m03)   <devices>
	I0717 18:23:07.689467  411620 main.go:141] libmachine: (ha-445282-m03)     <disk type='file' device='cdrom'>
	I0717 18:23:07.689484  411620 main.go:141] libmachine: (ha-445282-m03)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/boot2docker.iso'/>
	I0717 18:23:07.689499  411620 main.go:141] libmachine: (ha-445282-m03)       <target dev='hdc' bus='scsi'/>
	I0717 18:23:07.689515  411620 main.go:141] libmachine: (ha-445282-m03)       <readonly/>
	I0717 18:23:07.689524  411620 main.go:141] libmachine: (ha-445282-m03)     </disk>
	I0717 18:23:07.689534  411620 main.go:141] libmachine: (ha-445282-m03)     <disk type='file' device='disk'>
	I0717 18:23:07.689547  411620 main.go:141] libmachine: (ha-445282-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 18:23:07.689560  411620 main.go:141] libmachine: (ha-445282-m03)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/ha-445282-m03.rawdisk'/>
	I0717 18:23:07.689568  411620 main.go:141] libmachine: (ha-445282-m03)       <target dev='hda' bus='virtio'/>
	I0717 18:23:07.689596  411620 main.go:141] libmachine: (ha-445282-m03)     </disk>
	I0717 18:23:07.689619  411620 main.go:141] libmachine: (ha-445282-m03)     <interface type='network'>
	I0717 18:23:07.689633  411620 main.go:141] libmachine: (ha-445282-m03)       <source network='mk-ha-445282'/>
	I0717 18:23:07.689641  411620 main.go:141] libmachine: (ha-445282-m03)       <model type='virtio'/>
	I0717 18:23:07.689653  411620 main.go:141] libmachine: (ha-445282-m03)     </interface>
	I0717 18:23:07.689662  411620 main.go:141] libmachine: (ha-445282-m03)     <interface type='network'>
	I0717 18:23:07.689675  411620 main.go:141] libmachine: (ha-445282-m03)       <source network='default'/>
	I0717 18:23:07.689690  411620 main.go:141] libmachine: (ha-445282-m03)       <model type='virtio'/>
	I0717 18:23:07.689714  411620 main.go:141] libmachine: (ha-445282-m03)     </interface>
	I0717 18:23:07.689733  411620 main.go:141] libmachine: (ha-445282-m03)     <serial type='pty'>
	I0717 18:23:07.689744  411620 main.go:141] libmachine: (ha-445282-m03)       <target port='0'/>
	I0717 18:23:07.689754  411620 main.go:141] libmachine: (ha-445282-m03)     </serial>
	I0717 18:23:07.689765  411620 main.go:141] libmachine: (ha-445282-m03)     <console type='pty'>
	I0717 18:23:07.689775  411620 main.go:141] libmachine: (ha-445282-m03)       <target type='serial' port='0'/>
	I0717 18:23:07.689786  411620 main.go:141] libmachine: (ha-445282-m03)     </console>
	I0717 18:23:07.689796  411620 main.go:141] libmachine: (ha-445282-m03)     <rng model='virtio'>
	I0717 18:23:07.689813  411620 main.go:141] libmachine: (ha-445282-m03)       <backend model='random'>/dev/random</backend>
	I0717 18:23:07.689829  411620 main.go:141] libmachine: (ha-445282-m03)     </rng>
	I0717 18:23:07.689854  411620 main.go:141] libmachine: (ha-445282-m03)     
	I0717 18:23:07.689867  411620 main.go:141] libmachine: (ha-445282-m03)     
	I0717 18:23:07.689875  411620 main.go:141] libmachine: (ha-445282-m03)   </devices>
	I0717 18:23:07.689884  411620 main.go:141] libmachine: (ha-445282-m03) </domain>
	I0717 18:23:07.689893  411620 main.go:141] libmachine: (ha-445282-m03) 
	I0717 18:23:07.696417  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:36:6f:ce in network default
	I0717 18:23:07.697018  411620 main.go:141] libmachine: (ha-445282-m03) Ensuring networks are active...
	I0717 18:23:07.697034  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:07.697788  411620 main.go:141] libmachine: (ha-445282-m03) Ensuring network default is active
	I0717 18:23:07.698151  411620 main.go:141] libmachine: (ha-445282-m03) Ensuring network mk-ha-445282 is active
	I0717 18:23:07.698631  411620 main.go:141] libmachine: (ha-445282-m03) Getting domain xml...
	I0717 18:23:07.699442  411620 main.go:141] libmachine: (ha-445282-m03) Creating domain...
	I0717 18:23:08.918772  411620 main.go:141] libmachine: (ha-445282-m03) Waiting to get IP...
	I0717 18:23:08.919514  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:08.919957  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:08.919982  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:08.919927  412407 retry.go:31] will retry after 201.076635ms: waiting for machine to come up
	I0717 18:23:09.122189  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:09.122604  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:09.122651  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:09.122541  412407 retry.go:31] will retry after 360.345672ms: waiting for machine to come up
	I0717 18:23:09.483943  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:09.484376  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:09.484401  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:09.484346  412407 retry.go:31] will retry after 432.877971ms: waiting for machine to come up
	I0717 18:23:09.918549  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:09.919074  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:09.919111  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:09.919014  412407 retry.go:31] will retry after 482.54678ms: waiting for machine to come up
	I0717 18:23:10.402554  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:10.402929  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:10.402961  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:10.402874  412407 retry.go:31] will retry after 711.135179ms: waiting for machine to come up
	I0717 18:23:11.115357  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:11.115766  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:11.115806  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:11.115717  412407 retry.go:31] will retry after 696.130437ms: waiting for machine to come up
	I0717 18:23:11.813497  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:11.813963  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:11.813986  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:11.813907  412407 retry.go:31] will retry after 939.068462ms: waiting for machine to come up
	I0717 18:23:12.754574  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:12.755140  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:12.755193  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:12.755064  412407 retry.go:31] will retry after 1.438891186s: waiting for machine to come up
	I0717 18:23:14.195673  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:14.196027  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:14.196059  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:14.195974  412407 retry.go:31] will retry after 1.408170227s: waiting for machine to come up
	I0717 18:23:15.605804  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:15.606339  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:15.606368  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:15.606293  412407 retry.go:31] will retry after 1.419070639s: waiting for machine to come up
	I0717 18:23:17.027562  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:17.027966  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:17.027996  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:17.027912  412407 retry.go:31] will retry after 2.888338061s: waiting for machine to come up
	I0717 18:23:19.917660  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:19.918126  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:19.918154  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:19.918080  412407 retry.go:31] will retry after 2.69794922s: waiting for machine to come up
	I0717 18:23:22.617809  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:22.618152  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:22.618176  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:22.618109  412407 retry.go:31] will retry after 3.62794328s: waiting for machine to come up
	I0717 18:23:26.249574  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:26.249983  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:26.250006  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:26.249927  412407 retry.go:31] will retry after 5.249456453s: waiting for machine to come up
	I0717 18:23:31.501601  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:31.502073  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has current primary IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:31.502103  411620 main.go:141] libmachine: (ha-445282-m03) Found IP for machine: 192.168.39.214
	I0717 18:23:31.502118  411620 main.go:141] libmachine: (ha-445282-m03) Reserving static IP address...
	I0717 18:23:31.502477  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find host DHCP lease matching {name: "ha-445282-m03", mac: "52:54:00:da:b1:51", ip: "192.168.39.214"} in network mk-ha-445282
	I0717 18:23:31.574365  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Getting to WaitForSSH function...
	I0717 18:23:31.574400  411620 main.go:141] libmachine: (ha-445282-m03) Reserved static IP address: 192.168.39.214
	I0717 18:23:31.574414  411620 main.go:141] libmachine: (ha-445282-m03) Waiting for SSH to be available...
	I0717 18:23:31.577012  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:31.577401  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282
	I0717 18:23:31.577429  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find defined IP address of network mk-ha-445282 interface with MAC address 52:54:00:da:b1:51
	I0717 18:23:31.577556  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Using SSH client type: external
	I0717 18:23:31.577582  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa (-rw-------)
	I0717 18:23:31.577656  411620 main.go:141] libmachine: (ha-445282-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:23:31.577680  411620 main.go:141] libmachine: (ha-445282-m03) DBG | About to run SSH command:
	I0717 18:23:31.577695  411620 main.go:141] libmachine: (ha-445282-m03) DBG | exit 0
	I0717 18:23:31.581991  411620 main.go:141] libmachine: (ha-445282-m03) DBG | SSH cmd err, output: exit status 255: 
	I0717 18:23:31.582017  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 18:23:31.582047  411620 main.go:141] libmachine: (ha-445282-m03) DBG | command : exit 0
	I0717 18:23:31.582070  411620 main.go:141] libmachine: (ha-445282-m03) DBG | err     : exit status 255
	I0717 18:23:31.582098  411620 main.go:141] libmachine: (ha-445282-m03) DBG | output  : 
	I0717 18:23:34.582251  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Getting to WaitForSSH function...
	I0717 18:23:34.584637  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:34.584990  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:34.585036  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:34.585145  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Using SSH client type: external
	I0717 18:23:34.585178  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa (-rw-------)
	I0717 18:23:34.585216  411620 main.go:141] libmachine: (ha-445282-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:23:34.585235  411620 main.go:141] libmachine: (ha-445282-m03) DBG | About to run SSH command:
	I0717 18:23:34.585258  411620 main.go:141] libmachine: (ha-445282-m03) DBG | exit 0
	I0717 18:23:34.720617  411620 main.go:141] libmachine: (ha-445282-m03) DBG | SSH cmd err, output: <nil>: 
	I0717 18:23:34.720923  411620 main.go:141] libmachine: (ha-445282-m03) KVM machine creation complete!
	I0717 18:23:34.721281  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetConfigRaw
	I0717 18:23:34.721844  411620 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:23:34.722049  411620 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:23:34.722202  411620 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 18:23:34.722219  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetState
	I0717 18:23:34.723492  411620 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 18:23:34.723510  411620 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 18:23:34.723518  411620 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 18:23:34.723533  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:34.725826  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:34.726198  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:34.726231  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:34.726348  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:34.726488  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:34.726646  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:34.726814  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:34.727011  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:34.727244  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0717 18:23:34.727257  411620 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 18:23:34.839878  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:23:34.839911  411620 main.go:141] libmachine: Detecting the provisioner...
	I0717 18:23:34.839921  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:34.842511  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:34.842887  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:34.842911  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:34.843088  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:34.843268  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:34.843424  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:34.843581  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:34.843754  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:34.843923  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0717 18:23:34.843937  411620 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 18:23:34.961684  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 18:23:34.961763  411620 main.go:141] libmachine: found compatible host: buildroot
	I0717 18:23:34.961771  411620 main.go:141] libmachine: Provisioning with buildroot...
	I0717 18:23:34.961782  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetMachineName
	I0717 18:23:34.962054  411620 buildroot.go:166] provisioning hostname "ha-445282-m03"
	I0717 18:23:34.962090  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetMachineName
	I0717 18:23:34.962341  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:34.965135  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:34.965566  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:34.965593  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:34.965771  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:34.965955  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:34.966129  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:34.966272  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:34.966433  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:34.966671  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0717 18:23:34.966692  411620 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-445282-m03 && echo "ha-445282-m03" | sudo tee /etc/hostname
	I0717 18:23:35.095903  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-445282-m03
	
	I0717 18:23:35.095942  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:35.098557  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.098886  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:35.098922  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.099126  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:35.099336  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:35.099517  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:35.099682  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:35.099856  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:35.100071  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0717 18:23:35.100093  411620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-445282-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-445282-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-445282-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:23:35.225688  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:23:35.225719  411620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 18:23:35.225738  411620 buildroot.go:174] setting up certificates
	I0717 18:23:35.225751  411620 provision.go:84] configureAuth start
	I0717 18:23:35.225764  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetMachineName
	I0717 18:23:35.226052  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:23:35.228671  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.228956  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:35.228984  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.229126  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:35.231500  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.231873  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:35.231899  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.232066  411620 provision.go:143] copyHostCerts
	I0717 18:23:35.232106  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:23:35.232148  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 18:23:35.232161  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:23:35.232245  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 18:23:35.232379  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:23:35.232405  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 18:23:35.232413  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:23:35.232455  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 18:23:35.232569  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:23:35.232597  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 18:23:35.232603  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:23:35.232640  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 18:23:35.232730  411620 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.ha-445282-m03 san=[127.0.0.1 192.168.39.214 ha-445282-m03 localhost minikube]
	I0717 18:23:35.441554  411620 provision.go:177] copyRemoteCerts
	I0717 18:23:35.441634  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:23:35.441682  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:35.444232  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.444596  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:35.444633  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.444869  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:35.445123  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:35.445281  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:35.445410  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:23:35.530710  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 18:23:35.530818  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 18:23:35.556555  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 18:23:35.556642  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 18:23:35.583020  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 18:23:35.583101  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:23:35.608000  411620 provision.go:87] duration metric: took 382.235848ms to configureAuth
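For reference, the SAN list baked into the server certificate generated above can be double-checked on the guest once copyRemoteCerts has placed it at /etc/docker/server.pem (a quick manual sketch using standard openssl, not part of the test run itself):

	# print the Subject Alternative Name extension of the provisioned server cert
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# the output should roughly list the san=[...] entries logged by provision.go:
	#   DNS:ha-445282-m03, DNS:localhost, DNS:minikube, IP Address:127.0.0.1, IP Address:192.168.39.214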
	I0717 18:23:35.608030  411620 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:23:35.608241  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:23:35.608314  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:35.611002  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.611386  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:35.611417  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.611570  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:35.611813  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:35.612041  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:35.612199  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:35.612350  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:35.612576  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0717 18:23:35.612596  411620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:23:35.886127  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:23:35.886172  411620 main.go:141] libmachine: Checking connection to Docker...
	I0717 18:23:35.886183  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetURL
	I0717 18:23:35.887590  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Using libvirt version 6000000
	I0717 18:23:35.889859  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.890222  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:35.890255  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.890372  411620 main.go:141] libmachine: Docker is up and running!
	I0717 18:23:35.890388  411620 main.go:141] libmachine: Reticulating splines...
	I0717 18:23:35.890398  411620 client.go:171] duration metric: took 28.624547488s to LocalClient.Create
	I0717 18:23:35.890427  411620 start.go:167] duration metric: took 28.624622446s to libmachine.API.Create "ha-445282"
	I0717 18:23:35.890440  411620 start.go:293] postStartSetup for "ha-445282-m03" (driver="kvm2")
	I0717 18:23:35.890455  411620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:23:35.890491  411620 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:23:35.890754  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:23:35.890776  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:35.892685  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.893019  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:35.893045  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.893179  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:35.893376  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:35.893559  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:35.893722  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:23:35.979823  411620 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:23:35.984380  411620 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:23:35.984406  411620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 18:23:35.984471  411620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 18:23:35.984588  411620 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 18:23:35.984598  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /etc/ssl/certs/4001712.pem
	I0717 18:23:35.984689  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:23:35.994509  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:23:36.020925  411620 start.go:296] duration metric: took 130.467328ms for postStartSetup
	I0717 18:23:36.021000  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetConfigRaw
	I0717 18:23:36.021689  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:23:36.024364  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.024740  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:36.024763  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.025035  411620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:23:36.025250  411620 start.go:128] duration metric: took 28.779273648s to createHost
	I0717 18:23:36.025278  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:36.027479  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.027855  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:36.027882  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.028023  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:36.028204  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:36.028355  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:36.028545  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:36.028700  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:36.028908  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0717 18:23:36.028923  411620 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:23:36.145672  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721240616.127894753
	
	I0717 18:23:36.145710  411620 fix.go:216] guest clock: 1721240616.127894753
	I0717 18:23:36.145720  411620 fix.go:229] Guest: 2024-07-17 18:23:36.127894753 +0000 UTC Remote: 2024-07-17 18:23:36.025262913 +0000 UTC m=+158.624940901 (delta=102.63184ms)
	I0717 18:23:36.145744  411620 fix.go:200] guest clock delta is within tolerance: 102.63184ms
	I0717 18:23:36.145750  411620 start.go:83] releasing machines lock for "ha-445282-m03", held for 28.899944415s
	I0717 18:23:36.145779  411620 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:23:36.146142  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:23:36.148785  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.149154  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:36.149188  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.151822  411620 out.go:177] * Found network options:
	I0717 18:23:36.153314  411620 out.go:177]   - NO_PROXY=192.168.39.147,192.168.39.198
	W0717 18:23:36.154591  411620 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 18:23:36.154611  411620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 18:23:36.154627  411620 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:23:36.155321  411620 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:23:36.155552  411620 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:23:36.155639  411620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:23:36.155689  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	W0717 18:23:36.155809  411620 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 18:23:36.155833  411620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 18:23:36.155911  411620 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:23:36.155932  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:36.158623  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.158789  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.159055  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:36.159084  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.159224  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:36.159251  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:36.159258  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.159387  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:36.159470  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:36.159539  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:36.159609  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:36.159661  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:36.159725  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:23:36.159761  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:23:36.400733  411620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:23:36.406828  411620 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:23:36.406914  411620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:23:36.423355  411620 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:23:36.423381  411620 start.go:495] detecting cgroup driver to use...
	I0717 18:23:36.423454  411620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:23:36.439909  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:23:36.454185  411620 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:23:36.454250  411620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:23:36.468126  411620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:23:36.481535  411620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:23:36.596112  411620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:23:36.749997  411620 docker.go:233] disabling docker service ...
	I0717 18:23:36.750085  411620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:23:36.764921  411620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:23:36.779059  411620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:23:36.915600  411620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:23:37.026893  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:23:37.042207  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:23:37.061833  411620 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:23:37.061917  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:23:37.073663  411620 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:23:37.073732  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:23:37.085373  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:23:37.096230  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:23:37.107687  411620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:23:37.119064  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:23:37.130276  411620 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:23:37.148769  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
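Taken together, the sed edits above leave the CRI-O drop-in pointing at the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A quick way to confirm the result on the VM (illustrative only; the drop-in may contain other keys as well):

	# show only the keys touched by the configuration step above
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# roughly expected:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside the default_sysctls = [ ... ] block)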
	I0717 18:23:37.159195  411620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:23:37.169178  411620 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:23:37.169235  411620 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:23:37.183378  411620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:23:37.192909  411620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:23:37.304732  411620 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:23:37.451054  411620 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:23:37.451138  411620 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:23:37.456509  411620 start.go:563] Will wait 60s for crictl version
	I0717 18:23:37.456565  411620 ssh_runner.go:195] Run: which crictl
	I0717 18:23:37.460458  411620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:23:37.507517  411620 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:23:37.507597  411620 ssh_runner.go:195] Run: crio --version
	I0717 18:23:37.538306  411620 ssh_runner.go:195] Run: crio --version
	I0717 18:23:37.573280  411620 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:23:37.574440  411620 out.go:177]   - env NO_PROXY=192.168.39.147
	I0717 18:23:37.575673  411620 out.go:177]   - env NO_PROXY=192.168.39.147,192.168.39.198
	I0717 18:23:37.576672  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:23:37.579447  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:37.579942  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:37.579977  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:37.580196  411620 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:23:37.584592  411620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:23:37.597296  411620 mustload.go:65] Loading cluster: ha-445282
	I0717 18:23:37.597507  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:23:37.597758  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:23:37.597801  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:23:37.613675  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38415
	I0717 18:23:37.614095  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:23:37.614531  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:23:37.614559  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:23:37.614892  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:23:37.615095  411620 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:23:37.616611  411620 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:23:37.616934  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:23:37.616968  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:23:37.631684  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36573
	I0717 18:23:37.632122  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:23:37.632615  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:23:37.632639  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:23:37.632937  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:23:37.633141  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:23:37.633320  411620 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282 for IP: 192.168.39.214
	I0717 18:23:37.633334  411620 certs.go:194] generating shared ca certs ...
	I0717 18:23:37.633357  411620 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:23:37.633494  411620 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 18:23:37.633529  411620 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 18:23:37.633538  411620 certs.go:256] generating profile certs ...
	I0717 18:23:37.633608  411620 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key
	I0717 18:23:37.633638  411620 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.82168af2
	I0717 18:23:37.633653  411620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.82168af2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.147 192.168.39.198 192.168.39.214 192.168.39.254]
	I0717 18:23:38.109453  411620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.82168af2 ...
	I0717 18:23:38.109485  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.82168af2: {Name:mkdb824e5b55da3266aa6f37148aafce183da162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:23:38.109692  411620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.82168af2 ...
	I0717 18:23:38.109712  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.82168af2: {Name:mk56670ee8ee75e573097f8cc3976a91e07aaece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:23:38.109820  411620 certs.go:381] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.82168af2 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt
	I0717 18:23:38.109969  411620 certs.go:385] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.82168af2 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key
	I0717 18:23:38.110131  411620 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key
	I0717 18:23:38.110154  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 18:23:38.110173  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 18:23:38.110192  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 18:23:38.110210  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 18:23:38.110228  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 18:23:38.110245  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 18:23:38.110262  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 18:23:38.110279  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 18:23:38.110343  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 18:23:38.110382  411620 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 18:23:38.110394  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 18:23:38.110427  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 18:23:38.110459  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:23:38.110490  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 18:23:38.110542  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:23:38.110580  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:23:38.110609  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem -> /usr/share/ca-certificates/400171.pem
	I0717 18:23:38.110627  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /usr/share/ca-certificates/4001712.pem
	I0717 18:23:38.110671  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:23:38.114085  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:23:38.114566  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:23:38.114597  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:23:38.114810  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:23:38.115044  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:23:38.115219  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:23:38.115365  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:23:38.188927  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 18:23:38.194366  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 18:23:38.206584  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 18:23:38.211291  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0717 18:23:38.221523  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 18:23:38.225778  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 18:23:38.236121  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 18:23:38.240239  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0717 18:23:38.251927  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 18:23:38.256162  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 18:23:38.266944  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 18:23:38.271768  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 18:23:38.282802  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:23:38.308765  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:23:38.334255  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:23:38.359295  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 18:23:38.383022  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0717 18:23:38.410871  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 18:23:38.435726  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:23:38.461125  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:23:38.485187  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:23:38.510887  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 18:23:38.536966  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 18:23:38.563106  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 18:23:38.580790  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0717 18:23:38.598393  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 18:23:38.616059  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0717 18:23:38.633015  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 18:23:38.649426  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 18:23:38.666226  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 18:23:38.683149  411620 ssh_runner.go:195] Run: openssl version
	I0717 18:23:38.689111  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 18:23:38.701073  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 18:23:38.705929  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 18:23:38.705999  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 18:23:38.712084  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 18:23:38.722985  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 18:23:38.734081  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 18:23:38.738843  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 18:23:38.738901  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 18:23:38.744576  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:23:38.755741  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:23:38.766405  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:23:38.771070  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:23:38.771119  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:23:38.777098  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:23:38.787460  411620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:23:38.791509  411620 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 18:23:38.791566  411620 kubeadm.go:934] updating node {m03 192.168.39.214 8443 v1.30.2 crio true true} ...
	I0717 18:23:38.791711  411620 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-445282-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:23:38.791742  411620 kube-vip.go:115] generating kube-vip config ...
	I0717 18:23:38.791777  411620 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 18:23:38.807319  411620 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 18:23:38.807395  411620 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
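Once kube-vip is running on the control-plane nodes, the VIP from the manifest above (address 192.168.39.254, port 8443, interface eth0) should be announced by the current leader, and the API server should answer on it. A manual spot check, purely illustrative:

	# the current kube-vip leader should carry the HA VIP on eth0
	ip addr show eth0 | grep 192.168.39.254
	# the Kubernetes API should respond on the VIP (self-signed cert, hence -k)
	curl -k https://192.168.39.254:8443/version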
	I0717 18:23:38.807454  411620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:23:38.818576  411620 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0717 18:23:38.818639  411620 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0717 18:23:38.828511  411620 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0717 18:23:38.828542  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 18:23:38.828548  411620 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0717 18:23:38.828573  411620 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0717 18:23:38.828593  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:23:38.828595  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 18:23:38.828622  411620 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 18:23:38.828653  411620 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 18:23:38.843334  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 18:23:38.843355  411620 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0717 18:23:38.843374  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0717 18:23:38.843419  411620 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 18:23:38.843456  411620 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0717 18:23:38.843486  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0717 18:23:38.859958  411620 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0717 18:23:38.860016  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
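The kubectl, kubeadm and kubelet binaries pushed above come from minikube's local cache, which is populated from the dl.k8s.io URLs logged by binary.go together with their published sha256 files. A manual equivalent of that download-and-verify step would look roughly like this (sketch only, shown for kubectl):

	# fetch the binary and its checksum file, then verify before installing
	curl -LO https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl
	curl -LO https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check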
	I0717 18:23:39.759339  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 18:23:39.769905  411620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 18:23:39.788059  411620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:23:39.804267  411620 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 18:23:39.820446  411620 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 18:23:39.824470  411620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:23:39.836911  411620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:23:39.959606  411620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:23:39.977993  411620 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:23:39.978393  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:23:39.978448  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:23:39.994038  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38859
	I0717 18:23:39.994617  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:23:39.995123  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:23:39.995147  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:23:39.995517  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:23:39.995715  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:23:39.995910  411620 start.go:317] joinCluster: &{Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:23:39.996068  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 18:23:39.996089  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:23:39.999078  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:23:39.999597  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:23:39.999626  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:23:39.999780  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:23:39.999974  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:23:40.000144  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:23:40.000299  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:23:40.173669  411620 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:23:40.173723  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lsggqp.pqujppmj7tj4ps2p --discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-445282-m03 --control-plane --apiserver-advertise-address=192.168.39.214 --apiserver-bind-port=8443"
	I0717 18:24:04.316247  411620 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lsggqp.pqujppmj7tj4ps2p --discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-445282-m03 --control-plane --apiserver-advertise-address=192.168.39.214 --apiserver-bind-port=8443": (24.142488446s)
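
The join sequence is visible in the two commands above: minikube first asks the existing control plane for a fresh join command (kubeadm token create --print-join-command --ttl=0, started at 18:23:39), then appends the control-plane flags for the new member and runs the result on m03, which took about 24 seconds. A simplified sketch of that flow over ssh (addresses, CRI socket, and node name are copied from the log; this is an illustration of the idea, not minikube's joinCluster implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Step 1: have the first control plane print a join command with a fresh token.
    	out, err := exec.Command("ssh", "docker@192.168.39.147",
    		"sudo kubeadm token create --print-join-command --ttl=0").Output()
    	if err != nil {
    		panic(err)
    	}
    	join := strings.TrimSpace(string(out))

    	// Step 2: promote the joining node to a control plane by adding the flags
    	// seen in the log before executing the command on m03.
    	join += " --control-plane" +
    		" --apiserver-advertise-address=192.168.39.214" +
    		" --apiserver-bind-port=8443" +
    		" --cri-socket unix:///var/run/crio/crio.sock" +
    		" --node-name=ha-445282-m03" +
    		" --ignore-preflight-errors=all"

    	fmt.Println("joining m03 with:", join)
    	if err := exec.Command("ssh", "docker@192.168.39.214", "sudo "+join).Run(); err != nil {
    		panic(err)
    	}
    }
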
	I0717 18:24:04.316288  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 18:24:04.916010  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-445282-m03 minikube.k8s.io/updated_at=2024_07_17T18_24_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=ha-445282 minikube.k8s.io/primary=false
	I0717 18:24:05.051194  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-445282-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0717 18:24:05.196094  411620 start.go:319] duration metric: took 25.200179282s to joinCluster
	I0717 18:24:05.196187  411620 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:24:05.196562  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:24:05.197861  411620 out.go:177] * Verifying Kubernetes components...
	I0717 18:24:05.199310  411620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:24:05.426302  411620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:24:05.444554  411620 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:24:05.444810  411620 kapi.go:59] client config for ha-445282: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.crt", KeyFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key", CAFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 18:24:05.444878  411620 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.147:8443
	I0717 18:24:05.445090  411620 node_ready.go:35] waiting up to 6m0s for node "ha-445282-m03" to be "Ready" ...
	I0717 18:24:05.445180  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:05.445189  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:05.445197  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:05.445201  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:05.448758  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:05.945817  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:05.945851  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:05.945863  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:05.945868  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:05.950088  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:24:06.445797  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:06.445823  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:06.445835  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:06.445840  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:06.450734  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:24:06.945735  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:06.945766  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:06.945779  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:06.945787  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:06.949746  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:07.445759  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:07.445782  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:07.445790  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:07.445796  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:07.450492  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:24:07.451076  411620 node_ready.go:53] node "ha-445282-m03" has status "Ready":"False"
	I0717 18:24:07.945782  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:07.945811  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:07.945829  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:07.945874  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:07.950594  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:24:08.446029  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:08.446056  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:08.446067  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:08.446072  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:08.449253  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:08.946045  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:08.946074  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:08.946085  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:08.946092  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:08.949575  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:09.445390  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:09.445416  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:09.445446  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:09.445455  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:09.451340  411620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 18:24:09.452026  411620 node_ready.go:53] node "ha-445282-m03" has status "Ready":"False"
	I0717 18:24:09.945300  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:09.945324  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:09.945333  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:09.945339  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:09.948651  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:10.445299  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:10.445327  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:10.445336  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:10.445341  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:10.448853  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:10.946318  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:10.946341  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:10.946350  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:10.946354  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:10.950605  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:24:11.445437  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:11.445457  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:11.445465  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:11.445469  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:11.448314  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:11.945805  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:11.945833  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:11.945844  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:11.945852  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:11.949297  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:11.950044  411620 node_ready.go:53] node "ha-445282-m03" has status "Ready":"False"
	I0717 18:24:12.445974  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:12.445995  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:12.446003  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:12.446008  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:12.449645  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:12.945772  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:12.945797  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:12.945805  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:12.945810  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:12.949538  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:13.445755  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:13.445783  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:13.445793  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:13.445800  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:13.449093  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:13.945786  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:13.945810  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:13.945819  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:13.945824  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:13.955336  411620 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 18:24:13.955998  411620 node_ready.go:53] node "ha-445282-m03" has status "Ready":"False"
	I0717 18:24:14.445729  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:14.445753  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:14.445761  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:14.445765  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:14.449626  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:14.945601  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:14.945624  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:14.945633  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:14.945637  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:14.949007  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:15.446240  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:15.446276  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:15.446288  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:15.446295  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:15.450690  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:24:15.945372  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:15.945405  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:15.945417  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:15.945447  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:15.949002  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:16.445892  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:16.445916  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:16.445924  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:16.445928  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:16.451015  411620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 18:24:16.452254  411620 node_ready.go:53] node "ha-445282-m03" has status "Ready":"False"
	I0717 18:24:16.945603  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:16.945638  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:16.945645  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:16.945649  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:16.948855  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:17.445613  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:17.445645  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:17.445653  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:17.445658  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:17.449138  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:17.946299  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:17.946320  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:17.946328  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:17.946332  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:17.949583  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:18.446076  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:18.446099  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:18.446109  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:18.446116  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:18.449728  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:18.945959  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:18.945983  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:18.945992  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:18.945996  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:18.949235  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:18.950377  411620 node_ready.go:53] node "ha-445282-m03" has status "Ready":"False"
	I0717 18:24:19.445368  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:19.445393  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:19.445401  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:19.445406  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:19.448628  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:19.945566  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:19.945585  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:19.945594  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:19.945599  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:19.948591  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:20.445963  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:20.445985  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:20.445994  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:20.445998  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:20.449184  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:20.945346  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:20.945378  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:20.945390  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:20.945397  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:20.948588  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:21.445540  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:21.445566  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.445577  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.445582  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.448809  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:21.449350  411620 node_ready.go:49] node "ha-445282-m03" has status "Ready":"True"
	I0717 18:24:21.449369  411620 node_ready.go:38] duration metric: took 16.004266077s for node "ha-445282-m03" to be "Ready" ...
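
Everything from 18:24:05 to 18:24:21 above is a plain readiness poll: GET the Node object every ~500ms through the first control plane and stop once its Ready condition flips to True (about 16 seconds here). A compact client-go equivalent of that loop, reusing the kubeconfig path that appears in the log (the interval and timeout mirror the log but are otherwise arbitrary choices for this sketch):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19282-392903/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll the Node object until its Ready condition is True, as node_ready.go does above.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-445282-m03", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Println("node Ready:", err == nil)
    }
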
	I0717 18:24:21.449379  411620 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:24:21.449444  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:24:21.449455  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.449463  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.449466  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.456554  411620 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 18:24:21.463187  411620 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-28njs" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.463285  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-28njs
	I0717 18:24:21.463297  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.463308  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.463317  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.466094  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:21.466745  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:21.466765  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.466773  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.466778  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.469116  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:21.469603  411620 pod_ready.go:92] pod "coredns-7db6d8ff4d-28njs" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:21.469624  411620 pod_ready.go:81] duration metric: took 6.413174ms for pod "coredns-7db6d8ff4d-28njs" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.469633  411620 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rzxbr" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.469679  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rzxbr
	I0717 18:24:21.469686  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.469693  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.469698  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.471997  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:21.472604  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:21.472619  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.472626  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.472630  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.474786  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:21.475349  411620 pod_ready.go:92] pod "coredns-7db6d8ff4d-rzxbr" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:21.475367  411620 pod_ready.go:81] duration metric: took 5.728266ms for pod "coredns-7db6d8ff4d-rzxbr" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.475378  411620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.475439  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/etcd-ha-445282
	I0717 18:24:21.475449  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.475458  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.475468  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.477535  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:21.478072  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:21.478088  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.478097  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.478102  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.480010  411620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 18:24:21.480446  411620 pod_ready.go:92] pod "etcd-ha-445282" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:21.480462  411620 pod_ready.go:81] duration metric: took 5.076646ms for pod "etcd-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.480471  411620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.480563  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/etcd-ha-445282-m02
	I0717 18:24:21.480574  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.480581  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.480585  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.482764  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:21.483334  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:21.483349  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.483356  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.483361  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.485850  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:21.486312  411620 pod_ready.go:92] pod "etcd-ha-445282-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:21.486331  411620 pod_ready.go:81] duration metric: took 5.85437ms for pod "etcd-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.486338  411620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-445282-m03" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.645659  411620 request.go:629] Waited for 159.250572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/etcd-ha-445282-m03
	I0717 18:24:21.645933  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/etcd-ha-445282-m03
	I0717 18:24:21.645939  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.645948  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.645957  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.649393  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:21.846458  411620 request.go:629] Waited for 196.367585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:21.846529  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:21.846542  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.846553  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.846565  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.857374  411620 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0717 18:24:21.859263  411620 pod_ready.go:92] pod "etcd-ha-445282-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:21.859285  411620 pod_ready.go:81] duration metric: took 372.93962ms for pod "etcd-ha-445282-m03" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.859313  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:22.046604  411620 request.go:629] Waited for 187.17368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282
	I0717 18:24:22.046678  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282
	I0717 18:24:22.046685  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:22.046698  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:22.046706  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:22.049974  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:22.246176  411620 request.go:629] Waited for 195.358968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:22.246236  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:22.246241  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:22.246251  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:22.246256  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:22.249677  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:22.250186  411620 pod_ready.go:92] pod "kube-apiserver-ha-445282" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:22.250208  411620 pod_ready.go:81] duration metric: took 390.884341ms for pod "kube-apiserver-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:22.250218  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:22.445786  411620 request.go:629] Waited for 195.464948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282-m02
	I0717 18:24:22.445864  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282-m02
	I0717 18:24:22.445874  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:22.445890  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:22.445897  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:22.449286  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:22.646397  411620 request.go:629] Waited for 196.159395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:22.646453  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:22.646457  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:22.646465  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:22.646468  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:22.649637  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:22.650129  411620 pod_ready.go:92] pod "kube-apiserver-ha-445282-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:22.650148  411620 pod_ready.go:81] duration metric: took 399.920158ms for pod "kube-apiserver-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:22.650158  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-445282-m03" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:22.846197  411620 request.go:629] Waited for 195.965297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282-m03
	I0717 18:24:22.846298  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282-m03
	I0717 18:24:22.846305  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:22.846314  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:22.846320  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:22.849544  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:23.046481  411620 request.go:629] Waited for 196.035999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:23.046541  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:23.046545  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:23.046553  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:23.046556  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:23.049743  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:23.050573  411620 pod_ready.go:92] pod "kube-apiserver-ha-445282-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:23.050590  411620 pod_ready.go:81] duration metric: took 400.426327ms for pod "kube-apiserver-ha-445282-m03" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:23.050600  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:23.246181  411620 request.go:629] Waited for 195.488267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282
	I0717 18:24:23.246264  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282
	I0717 18:24:23.246272  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:23.246284  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:23.246294  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:23.250246  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:23.446240  411620 request.go:629] Waited for 195.35445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:23.446332  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:23.446344  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:23.446353  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:23.446362  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:23.449334  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:23.450071  411620 pod_ready.go:92] pod "kube-controller-manager-ha-445282" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:23.450094  411620 pod_ready.go:81] duration metric: took 399.486233ms for pod "kube-controller-manager-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:23.450108  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:23.646580  411620 request.go:629] Waited for 196.393708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282-m02
	I0717 18:24:23.646684  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282-m02
	I0717 18:24:23.646692  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:23.646703  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:23.646715  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:23.650140  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:23.846516  411620 request.go:629] Waited for 195.399684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:23.846600  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:23.846606  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:23.846614  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:23.846618  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:23.850347  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:23.850968  411620 pod_ready.go:92] pod "kube-controller-manager-ha-445282-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:23.850988  411620 pod_ready.go:81] duration metric: took 400.873337ms for pod "kube-controller-manager-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:23.850999  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-445282-m03" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:24.046021  411620 request.go:629] Waited for 194.938571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282-m03
	I0717 18:24:24.046093  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282-m03
	I0717 18:24:24.046101  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:24.046110  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:24.046115  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:24.049580  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:24.245624  411620 request.go:629] Waited for 195.287009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:24.245688  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:24.245693  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:24.245700  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:24.245704  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:24.249120  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:24.249804  411620 pod_ready.go:92] pod "kube-controller-manager-ha-445282-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:24.249824  411620 pod_ready.go:81] duration metric: took 398.817754ms for pod "kube-controller-manager-ha-445282-m03" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:24.249838  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vxmp8" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:24.445919  411620 request.go:629] Waited for 195.975163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vxmp8
	I0717 18:24:24.445996  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vxmp8
	I0717 18:24:24.446003  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:24.446011  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:24.446017  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:24.449796  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:24.646104  411620 request.go:629] Waited for 195.35989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:24.646167  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:24.646172  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:24.646180  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:24.646184  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:24.649709  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:24.650323  411620 pod_ready.go:92] pod "kube-proxy-vxmp8" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:24.650344  411620 pod_ready.go:81] duration metric: took 400.498641ms for pod "kube-proxy-vxmp8" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:24.650358  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xs65r" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:24.846293  411620 request.go:629] Waited for 195.837794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xs65r
	I0717 18:24:24.846409  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xs65r
	I0717 18:24:24.846420  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:24.846438  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:24.846448  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:24.849634  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:25.045764  411620 request.go:629] Waited for 195.397847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:25.045823  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:25.045828  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:25.045837  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:25.045841  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:25.049064  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:25.049800  411620 pod_ready.go:92] pod "kube-proxy-xs65r" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:25.049819  411620 pod_ready.go:81] duration metric: took 399.450493ms for pod "kube-proxy-xs65r" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:25.049829  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zb54p" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:25.245791  411620 request.go:629] Waited for 195.887447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zb54p
	I0717 18:24:25.245881  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zb54p
	I0717 18:24:25.245892  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:25.245903  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:25.245910  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:25.249711  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:25.446241  411620 request.go:629] Waited for 195.755955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:25.446304  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:25.446309  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:25.446317  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:25.446324  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:25.449659  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:25.450286  411620 pod_ready.go:92] pod "kube-proxy-zb54p" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:25.450305  411620 pod_ready.go:81] duration metric: took 400.470675ms for pod "kube-proxy-zb54p" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:25.450314  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:25.646534  411620 request.go:629] Waited for 196.092786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282
	I0717 18:24:25.646632  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282
	I0717 18:24:25.646644  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:25.646655  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:25.646665  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:25.650065  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:25.846125  411620 request.go:629] Waited for 195.372135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:25.846194  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:25.846204  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:25.846218  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:25.846228  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:25.849639  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:25.850231  411620 pod_ready.go:92] pod "kube-scheduler-ha-445282" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:25.850249  411620 pod_ready.go:81] duration metric: took 399.928986ms for pod "kube-scheduler-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:25.850260  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:26.046335  411620 request.go:629] Waited for 195.99919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282-m02
	I0717 18:24:26.046402  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282-m02
	I0717 18:24:26.046408  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:26.046416  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:26.046421  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:26.049721  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:26.246000  411620 request.go:629] Waited for 195.358558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:26.246081  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:26.246088  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:26.246096  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:26.246102  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:26.249505  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:26.250004  411620 pod_ready.go:92] pod "kube-scheduler-ha-445282-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:26.250026  411620 pod_ready.go:81] duration metric: took 399.755503ms for pod "kube-scheduler-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:26.250040  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-445282-m03" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:26.446110  411620 request.go:629] Waited for 195.960662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282-m03
	I0717 18:24:26.446191  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282-m03
	I0717 18:24:26.446200  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:26.446212  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:26.446222  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:26.449554  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:26.645601  411620 request.go:629] Waited for 195.230272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:26.645666  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:26.645672  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:26.645682  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:26.645687  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:26.648754  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:26.649370  411620 pod_ready.go:92] pod "kube-scheduler-ha-445282-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:26.649388  411620 pod_ready.go:81] duration metric: took 399.340756ms for pod "kube-scheduler-ha-445282-m03" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:26.649401  411620 pod_ready.go:38] duration metric: took 5.200011858s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:24:26.649417  411620 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:24:26.649473  411620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:24:26.665361  411620 api_server.go:72] duration metric: took 21.469138503s to wait for apiserver process to appear ...
	I0717 18:24:26.665384  411620 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:24:26.665403  411620 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0717 18:24:26.669685  411620 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0717 18:24:26.669747  411620 round_trippers.go:463] GET https://192.168.39.147:8443/version
	I0717 18:24:26.669755  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:26.669763  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:26.669769  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:26.670788  411620 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 18:24:26.670863  411620 api_server.go:141] control plane version: v1.30.2
	I0717 18:24:26.670884  411620 api_server.go:131] duration metric: took 5.48806ms to wait for apiserver health ...
	I0717 18:24:26.670898  411620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:24:26.846316  411620 request.go:629] Waited for 175.328812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:24:26.846382  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:24:26.846387  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:26.846395  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:26.846402  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:26.853526  411620 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 18:24:26.860951  411620 system_pods.go:59] 24 kube-system pods found
	I0717 18:24:26.860989  411620 system_pods.go:61] "coredns-7db6d8ff4d-28njs" [1e8f2f11-c89c-42ae-829a-e2cf1dea11b6] Running
	I0717 18:24:26.860995  411620 system_pods.go:61] "coredns-7db6d8ff4d-rzxbr" [9630d87d-3470-4675-9b3c-a10ff614f5e1] Running
	I0717 18:24:26.861000  411620 system_pods.go:61] "etcd-ha-445282" [0575d3f5-82a8-4bfd-9386-00d014e19119] Running
	I0717 18:24:26.861005  411620 system_pods.go:61] "etcd-ha-445282-m02" [eb066c71-5455-4bd5-b5c0-f7858661506b] Running
	I0717 18:24:26.861010  411620 system_pods.go:61] "etcd-ha-445282-m03" [9621969a-6d14-4d47-92b2-c5dc4a2ca531] Running
	I0717 18:24:26.861014  411620 system_pods.go:61] "kindnet-75gcw" [872c1132-e584-47c1-a873-74615d52511b] Running
	I0717 18:24:26.861020  411620 system_pods.go:61] "kindnet-mdqdz" [fdb368a3-7d1c-4073-a351-85d6c92a27af] Running
	I0717 18:24:26.861027  411620 system_pods.go:61] "kindnet-x62t5" [1045c2e4-d4c7-43be-8050-caed7eecc2a7] Running
	I0717 18:24:26.861036  411620 system_pods.go:61] "kube-apiserver-ha-445282" [d7814ca7-0944-4cac-8438-53640be6f85c] Running
	I0717 18:24:26.861042  411620 system_pods.go:61] "kube-apiserver-ha-445282-m02" [1014746f-377d-455f-b86b-66e4ee3aaddf] Running
	I0717 18:24:26.861048  411620 system_pods.go:61] "kube-apiserver-ha-445282-m03" [40ca072c-1516-4ba2-9224-35b7457e06eb] Running
	I0717 18:24:26.861054  411620 system_pods.go:61] "kube-controller-manager-ha-445282" [4b62f365-b4c2-46fd-9ca6-6c18f0205159] Running
	I0717 18:24:26.861060  411620 system_pods.go:61] "kube-controller-manager-ha-445282-m02" [f7ef8ac1-6f28-49f2-95a3-9224907eaf2b] Running
	I0717 18:24:26.861066  411620 system_pods.go:61] "kube-controller-manager-ha-445282-m03" [438e8ce2-42b4-4ba1-8982-cc91043c6025] Running
	I0717 18:24:26.861074  411620 system_pods.go:61] "kube-proxy-vxmp8" [cca555da-b93a-430c-8fbe-7e732af65a3a] Running
	I0717 18:24:26.861079  411620 system_pods.go:61] "kube-proxy-xs65r" [f0a65765-1826-47e6-ab8d-78ae6bb3abca] Running
	I0717 18:24:26.861087  411620 system_pods.go:61] "kube-proxy-zb54p" [4f525f13-19ee-4a9a-a898-3fc33539d368] Running
	I0717 18:24:26.861092  411620 system_pods.go:61] "kube-scheduler-ha-445282" [ec2ecb84-3559-430f-815c-a2d2ccbb197b] Running
	I0717 18:24:26.861098  411620 system_pods.go:61] "kube-scheduler-ha-445282-m02" [71380e3c-2e00-4bd3-adf8-06af51f3bb49] Running
	I0717 18:24:26.861104  411620 system_pods.go:61] "kube-scheduler-ha-445282-m03" [efca200e-c509-4fe1-aae4-35805a8a1b79] Running
	I0717 18:24:26.861109  411620 system_pods.go:61] "kube-vip-ha-445282" [ca5bcedd-e43a-4711-bdfc-dc1c2c524d86] Running
	I0717 18:24:26.861114  411620 system_pods.go:61] "kube-vip-ha-445282-m02" [53798037-a734-43b8-be52-834446680e9a] Running
	I0717 18:24:26.861121  411620 system_pods.go:61] "kube-vip-ha-445282-m03" [11e685c6-4c65-4e8d-9d63-929d7efb2140] Running
	I0717 18:24:26.861125  411620 system_pods.go:61] "storage-provisioner" [ae931c3b-8935-481d-bef4-0b05dad8c915] Running
	I0717 18:24:26.861134  411620 system_pods.go:74] duration metric: took 190.225321ms to wait for pod list to return data ...
	I0717 18:24:26.861149  411620 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:24:27.046590  411620 request.go:629] Waited for 185.348094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/default/serviceaccounts
	I0717 18:24:27.046673  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/default/serviceaccounts
	I0717 18:24:27.046682  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:27.046692  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:27.046704  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:27.050119  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:27.050236  411620 default_sa.go:45] found service account: "default"
	I0717 18:24:27.050250  411620 default_sa.go:55] duration metric: took 189.094114ms for default service account to be created ...
	I0717 18:24:27.050258  411620 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:24:27.245634  411620 request.go:629] Waited for 195.301482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:24:27.245718  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:24:27.245724  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:27.245730  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:27.245736  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:27.252192  411620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 18:24:27.258652  411620 system_pods.go:86] 24 kube-system pods found
	I0717 18:24:27.258681  411620 system_pods.go:89] "coredns-7db6d8ff4d-28njs" [1e8f2f11-c89c-42ae-829a-e2cf1dea11b6] Running
	I0717 18:24:27.258687  411620 system_pods.go:89] "coredns-7db6d8ff4d-rzxbr" [9630d87d-3470-4675-9b3c-a10ff614f5e1] Running
	I0717 18:24:27.258691  411620 system_pods.go:89] "etcd-ha-445282" [0575d3f5-82a8-4bfd-9386-00d014e19119] Running
	I0717 18:24:27.258695  411620 system_pods.go:89] "etcd-ha-445282-m02" [eb066c71-5455-4bd5-b5c0-f7858661506b] Running
	I0717 18:24:27.258700  411620 system_pods.go:89] "etcd-ha-445282-m03" [9621969a-6d14-4d47-92b2-c5dc4a2ca531] Running
	I0717 18:24:27.258705  411620 system_pods.go:89] "kindnet-75gcw" [872c1132-e584-47c1-a873-74615d52511b] Running
	I0717 18:24:27.258711  411620 system_pods.go:89] "kindnet-mdqdz" [fdb368a3-7d1c-4073-a351-85d6c92a27af] Running
	I0717 18:24:27.258717  411620 system_pods.go:89] "kindnet-x62t5" [1045c2e4-d4c7-43be-8050-caed7eecc2a7] Running
	I0717 18:24:27.258722  411620 system_pods.go:89] "kube-apiserver-ha-445282" [d7814ca7-0944-4cac-8438-53640be6f85c] Running
	I0717 18:24:27.258730  411620 system_pods.go:89] "kube-apiserver-ha-445282-m02" [1014746f-377d-455f-b86b-66e4ee3aaddf] Running
	I0717 18:24:27.258737  411620 system_pods.go:89] "kube-apiserver-ha-445282-m03" [40ca072c-1516-4ba2-9224-35b7457e06eb] Running
	I0717 18:24:27.258745  411620 system_pods.go:89] "kube-controller-manager-ha-445282" [4b62f365-b4c2-46fd-9ca6-6c18f0205159] Running
	I0717 18:24:27.258756  411620 system_pods.go:89] "kube-controller-manager-ha-445282-m02" [f7ef8ac1-6f28-49f2-95a3-9224907eaf2b] Running
	I0717 18:24:27.258762  411620 system_pods.go:89] "kube-controller-manager-ha-445282-m03" [438e8ce2-42b4-4ba1-8982-cc91043c6025] Running
	I0717 18:24:27.258768  411620 system_pods.go:89] "kube-proxy-vxmp8" [cca555da-b93a-430c-8fbe-7e732af65a3a] Running
	I0717 18:24:27.258772  411620 system_pods.go:89] "kube-proxy-xs65r" [f0a65765-1826-47e6-ab8d-78ae6bb3abca] Running
	I0717 18:24:27.258777  411620 system_pods.go:89] "kube-proxy-zb54p" [4f525f13-19ee-4a9a-a898-3fc33539d368] Running
	I0717 18:24:27.258781  411620 system_pods.go:89] "kube-scheduler-ha-445282" [ec2ecb84-3559-430f-815c-a2d2ccbb197b] Running
	I0717 18:24:27.258786  411620 system_pods.go:89] "kube-scheduler-ha-445282-m02" [71380e3c-2e00-4bd3-adf8-06af51f3bb49] Running
	I0717 18:24:27.258789  411620 system_pods.go:89] "kube-scheduler-ha-445282-m03" [efca200e-c509-4fe1-aae4-35805a8a1b79] Running
	I0717 18:24:27.258794  411620 system_pods.go:89] "kube-vip-ha-445282" [ca5bcedd-e43a-4711-bdfc-dc1c2c524d86] Running
	I0717 18:24:27.258798  411620 system_pods.go:89] "kube-vip-ha-445282-m02" [53798037-a734-43b8-be52-834446680e9a] Running
	I0717 18:24:27.258802  411620 system_pods.go:89] "kube-vip-ha-445282-m03" [11e685c6-4c65-4e8d-9d63-929d7efb2140] Running
	I0717 18:24:27.258806  411620 system_pods.go:89] "storage-provisioner" [ae931c3b-8935-481d-bef4-0b05dad8c915] Running
	I0717 18:24:27.258812  411620 system_pods.go:126] duration metric: took 208.548733ms to wait for k8s-apps to be running ...
	I0717 18:24:27.258823  411620 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:24:27.258884  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:24:27.274922  411620 system_svc.go:56] duration metric: took 16.088371ms WaitForService to wait for kubelet
	I0717 18:24:27.274955  411620 kubeadm.go:582] duration metric: took 22.078733901s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:24:27.274984  411620 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:24:27.446213  411620 request.go:629] Waited for 171.128406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes
	I0717 18:24:27.446399  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes
	I0717 18:24:27.446424  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:27.446436  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:27.446441  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:27.450859  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:24:27.452709  411620 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:24:27.452729  411620 node_conditions.go:123] node cpu capacity is 2
	I0717 18:24:27.452741  411620 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:24:27.452745  411620 node_conditions.go:123] node cpu capacity is 2
	I0717 18:24:27.452748  411620 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:24:27.452751  411620 node_conditions.go:123] node cpu capacity is 2
	I0717 18:24:27.452755  411620 node_conditions.go:105] duration metric: took 177.766473ms to run NodePressure ...
	I0717 18:24:27.452766  411620 start.go:241] waiting for startup goroutines ...
	I0717 18:24:27.452796  411620 start.go:255] writing updated cluster config ...
	I0717 18:24:27.453063  411620 ssh_runner.go:195] Run: rm -f paused
	I0717 18:24:27.505257  411620 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:24:27.507135  411620 out.go:177] * Done! kubectl is now configured to use "ha-445282" cluster and "default" namespace by default
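
	For context on the apiserver wait logged above (api_server.go): the probe amounts to polling the control-plane endpoint until /healthz returns 200 "ok", then reading /version, which is where the "control plane version: v1.30.2" line comes from. The standalone Go sketch below illustrates that flow; it is not minikube source. The hard-coded https://192.168.39.147:8443 endpoint, the 6-minute deadline, and the InsecureSkipVerify shortcut are assumptions made only for the example (minikube itself authenticates with the cluster's client certificates).

	// Illustrative sketch only, not minikube code: poll /healthz until the
	// control plane answers 200 "ok", then read /version.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above; an assumption for this example.
		base := "https://192.168.39.147:8443"

		// The test cluster uses a self-signed CA, so the sketch skips TLS
		// verification instead of loading the cluster's client certificates.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		// Bounded poll of /healthz, mirroring the 6m0s waits in the log.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(base + "/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body) // expect "ok"
					break
				}
			}
			time.Sleep(500 * time.Millisecond)
		}

		// Then read /version to learn the control-plane version.
		resp, err := client.Get(base + "/version")
		if err != nil {
			fmt.Println("version check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(string(body))
	}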
	
	
	==> CRI-O <==
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.648544246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721240886648404909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f300c21-cc64-49db-a094-25df75e37ce1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.649191640Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0049b316-efd1-4451-abd8-a11c01f045cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.649246106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0049b316-efd1-4451-abd8-a11c01f045cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.649609919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46bb59b8c88a5f72356d7eab6e299cb49357832b2f32f9da4d688f440d7708de,PodSandboxId:c6775eb0d598035f8cd74b757ae38e81e954dc7f515089267a841fa0e9cb45be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721240671679698693,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotations:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ce94edc90340e3fecdf7e9c373bf97b043857f76676c04f062a075824d8435,PodSandboxId:5dcf3fb8a7f3f5d54ff6c76abb70ec4580f6cebcf52b0c827811568135666097,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721240530760249768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570,PodSandboxId:7904758cf99a7ab28546eb8985ee7b046204d30d1edf39094c972ed389e5fbd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240530705259652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8405ae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1,PodSandboxId:1b4104fef2abaea24a96f4b40a7ae8dfd47c5d0b44c0b88ab5fd54254951ddff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240530698723869,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c8
9c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22,PodSandboxId:ea48366339cf7e3949139c7e70a94f474f735581280c6ec1323d8b6403124191,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CON
TAINER_RUNNING,CreatedAt:1721240518897882747,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0,PodSandboxId:9798b06dd09f98ca5f7cd1bfbfde8d398337d482475c16fb27417fc47dc574b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721240514
654026257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac29ebebce0938fd21e40b0afaed55120b3a90091496f7e0bb354f366e3983d1,PodSandboxId:180a789b714bd39d990f20ae64f2877f639a08c6c0a2ebed663b786b4155f211,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172124049640
4937495,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dd8913571a8d10ff9e0c918f975230e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943,PodSandboxId:d2f7bf6b169d4d9ca65b56d285cee83b77ebe598e1560374d9f2397db27fe0fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721240493481006900,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annotations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608260c5da2653858a3ba5ed68d5d0fd133359fe2d82577c89dd208d1fd4061a,PodSandboxId:e46a9bac3bd93e20e4e77a2402e91cab0878f1ee6658c9be0c3f8be2e17f1d93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721240493465205078,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f910525936daaedaf4fb3cce81ed7e6f3f6fb3c9cf2aa2ba7e26987a717c5b8b,PodSandboxId:c34972633700db086b85419fb496ea24fc7b4fd5034b94f01d97e96af0978505,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721240493440874611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65,PodSandboxId:b5b8e1d746c8d2a45352b8a3ad8ed98ccc12e52438cfffc99ed7b3e0d101f57b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721240493390934896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0049b316-efd1-4451-abd8-a11c01f045cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.689070760Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e66d04a-720e-42c4-b069-a7da7b2f6f7e name=/runtime.v1.RuntimeService/Version
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.689160558Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e66d04a-720e-42c4-b069-a7da7b2f6f7e name=/runtime.v1.RuntimeService/Version
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.690125871Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb26750b-8ad7-4c2b-9ab8-5fca86c1bdb1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.690641632Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721240886690607398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb26750b-8ad7-4c2b-9ab8-5fca86c1bdb1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.691241523Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d217673-4b36-4cd1-b9a8-3d87adecb7ab name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.691301911Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d217673-4b36-4cd1-b9a8-3d87adecb7ab name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.691590981Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46bb59b8c88a5f72356d7eab6e299cb49357832b2f32f9da4d688f440d7708de,PodSandboxId:c6775eb0d598035f8cd74b757ae38e81e954dc7f515089267a841fa0e9cb45be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721240671679698693,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotations:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ce94edc90340e3fecdf7e9c373bf97b043857f76676c04f062a075824d8435,PodSandboxId:5dcf3fb8a7f3f5d54ff6c76abb70ec4580f6cebcf52b0c827811568135666097,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721240530760249768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570,PodSandboxId:7904758cf99a7ab28546eb8985ee7b046204d30d1edf39094c972ed389e5fbd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240530705259652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8405ae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1,PodSandboxId:1b4104fef2abaea24a96f4b40a7ae8dfd47c5d0b44c0b88ab5fd54254951ddff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240530698723869,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c8
9c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22,PodSandboxId:ea48366339cf7e3949139c7e70a94f474f735581280c6ec1323d8b6403124191,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CON
TAINER_RUNNING,CreatedAt:1721240518897882747,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0,PodSandboxId:9798b06dd09f98ca5f7cd1bfbfde8d398337d482475c16fb27417fc47dc574b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721240514
654026257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac29ebebce0938fd21e40b0afaed55120b3a90091496f7e0bb354f366e3983d1,PodSandboxId:180a789b714bd39d990f20ae64f2877f639a08c6c0a2ebed663b786b4155f211,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172124049640
4937495,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dd8913571a8d10ff9e0c918f975230e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943,PodSandboxId:d2f7bf6b169d4d9ca65b56d285cee83b77ebe598e1560374d9f2397db27fe0fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721240493481006900,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annotations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608260c5da2653858a3ba5ed68d5d0fd133359fe2d82577c89dd208d1fd4061a,PodSandboxId:e46a9bac3bd93e20e4e77a2402e91cab0878f1ee6658c9be0c3f8be2e17f1d93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721240493465205078,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f910525936daaedaf4fb3cce81ed7e6f3f6fb3c9cf2aa2ba7e26987a717c5b8b,PodSandboxId:c34972633700db086b85419fb496ea24fc7b4fd5034b94f01d97e96af0978505,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721240493440874611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65,PodSandboxId:b5b8e1d746c8d2a45352b8a3ad8ed98ccc12e52438cfffc99ed7b3e0d101f57b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721240493390934896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3d217673-4b36-4cd1-b9a8-3d87adecb7ab name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.737176594Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a8bdf55-1017-41ee-a18a-d25e38aa313b name=/runtime.v1.RuntimeService/Version
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.737263590Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a8bdf55-1017-41ee-a18a-d25e38aa313b name=/runtime.v1.RuntimeService/Version
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.738501889Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d88ae7f7-2cb9-41be-9c08-8493af4221ff name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.739110643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721240886739086657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d88ae7f7-2cb9-41be-9c08-8493af4221ff name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.739719182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4defdde-2429-4116-9668-784f289a3a1c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.739875762Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4defdde-2429-4116-9668-784f289a3a1c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.740190572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46bb59b8c88a5f72356d7eab6e299cb49357832b2f32f9da4d688f440d7708de,PodSandboxId:c6775eb0d598035f8cd74b757ae38e81e954dc7f515089267a841fa0e9cb45be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721240671679698693,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotations:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ce94edc90340e3fecdf7e9c373bf97b043857f76676c04f062a075824d8435,PodSandboxId:5dcf3fb8a7f3f5d54ff6c76abb70ec4580f6cebcf52b0c827811568135666097,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721240530760249768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570,PodSandboxId:7904758cf99a7ab28546eb8985ee7b046204d30d1edf39094c972ed389e5fbd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240530705259652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8405ae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1,PodSandboxId:1b4104fef2abaea24a96f4b40a7ae8dfd47c5d0b44c0b88ab5fd54254951ddff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240530698723869,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c8
9c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22,PodSandboxId:ea48366339cf7e3949139c7e70a94f474f735581280c6ec1323d8b6403124191,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CON
TAINER_RUNNING,CreatedAt:1721240518897882747,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0,PodSandboxId:9798b06dd09f98ca5f7cd1bfbfde8d398337d482475c16fb27417fc47dc574b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721240514
654026257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac29ebebce0938fd21e40b0afaed55120b3a90091496f7e0bb354f366e3983d1,PodSandboxId:180a789b714bd39d990f20ae64f2877f639a08c6c0a2ebed663b786b4155f211,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172124049640
4937495,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dd8913571a8d10ff9e0c918f975230e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943,PodSandboxId:d2f7bf6b169d4d9ca65b56d285cee83b77ebe598e1560374d9f2397db27fe0fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721240493481006900,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annotations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608260c5da2653858a3ba5ed68d5d0fd133359fe2d82577c89dd208d1fd4061a,PodSandboxId:e46a9bac3bd93e20e4e77a2402e91cab0878f1ee6658c9be0c3f8be2e17f1d93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721240493465205078,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f910525936daaedaf4fb3cce81ed7e6f3f6fb3c9cf2aa2ba7e26987a717c5b8b,PodSandboxId:c34972633700db086b85419fb496ea24fc7b4fd5034b94f01d97e96af0978505,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721240493440874611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65,PodSandboxId:b5b8e1d746c8d2a45352b8a3ad8ed98ccc12e52438cfffc99ed7b3e0d101f57b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721240493390934896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4defdde-2429-4116-9668-784f289a3a1c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.786199607Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a38c5758-8e44-40ca-9388-1576c314853a name=/runtime.v1.RuntimeService/Version
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.786278161Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a38c5758-8e44-40ca-9388-1576c314853a name=/runtime.v1.RuntimeService/Version
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.788051809Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9af1a182-a966-4323-977e-ce3100b5ae68 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.788803048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721240886788774670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9af1a182-a966-4323-977e-ce3100b5ae68 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.789377479Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0840a4a8-f4d9-4988-89b2-7e6133fdb2a3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.789562176Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0840a4a8-f4d9-4988-89b2-7e6133fdb2a3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:06 ha-445282 crio[683]: time="2024-07-17 18:28:06.789846141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46bb59b8c88a5f72356d7eab6e299cb49357832b2f32f9da4d688f440d7708de,PodSandboxId:c6775eb0d598035f8cd74b757ae38e81e954dc7f515089267a841fa0e9cb45be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721240671679698693,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotations:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ce94edc90340e3fecdf7e9c373bf97b043857f76676c04f062a075824d8435,PodSandboxId:5dcf3fb8a7f3f5d54ff6c76abb70ec4580f6cebcf52b0c827811568135666097,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721240530760249768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570,PodSandboxId:7904758cf99a7ab28546eb8985ee7b046204d30d1edf39094c972ed389e5fbd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240530705259652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8405ae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1,PodSandboxId:1b4104fef2abaea24a96f4b40a7ae8dfd47c5d0b44c0b88ab5fd54254951ddff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240530698723869,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c8
9c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22,PodSandboxId:ea48366339cf7e3949139c7e70a94f474f735581280c6ec1323d8b6403124191,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CON
TAINER_RUNNING,CreatedAt:1721240518897882747,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0,PodSandboxId:9798b06dd09f98ca5f7cd1bfbfde8d398337d482475c16fb27417fc47dc574b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721240514
654026257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac29ebebce0938fd21e40b0afaed55120b3a90091496f7e0bb354f366e3983d1,PodSandboxId:180a789b714bd39d990f20ae64f2877f639a08c6c0a2ebed663b786b4155f211,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172124049640
4937495,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dd8913571a8d10ff9e0c918f975230e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943,PodSandboxId:d2f7bf6b169d4d9ca65b56d285cee83b77ebe598e1560374d9f2397db27fe0fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721240493481006900,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annotations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608260c5da2653858a3ba5ed68d5d0fd133359fe2d82577c89dd208d1fd4061a,PodSandboxId:e46a9bac3bd93e20e4e77a2402e91cab0878f1ee6658c9be0c3f8be2e17f1d93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721240493465205078,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f910525936daaedaf4fb3cce81ed7e6f3f6fb3c9cf2aa2ba7e26987a717c5b8b,PodSandboxId:c34972633700db086b85419fb496ea24fc7b4fd5034b94f01d97e96af0978505,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721240493440874611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65,PodSandboxId:b5b8e1d746c8d2a45352b8a3ad8ed98ccc12e52438cfffc99ed7b3e0d101f57b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721240493390934896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0840a4a8-f4d9-4988-89b2-7e6133fdb2a3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	46bb59b8c88a5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   c6775eb0d5980       busybox-fc5497c4f-mcsw8
	54ce94edc9034       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   5dcf3fb8a7f3f       storage-provisioner
	408ccf9c4f5cb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   7904758cf99a7       coredns-7db6d8ff4d-rzxbr
	9c8f03436294a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   1b4104fef2aba       coredns-7db6d8ff4d-28njs
	6e8619164a43b       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    6 minutes ago       Running             kindnet-cni               0                   ea48366339cf7       kindnet-75gcw
	ab95f55f84d8d       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      6 minutes ago       Running             kube-proxy                0                   9798b06dd09f9       kube-proxy-vxmp8
	ac29ebebce093       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   180a789b714bd       kube-vip-ha-445282
	09fdf7de5bf8c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   d2f7bf6b169d4       etcd-ha-445282
	608260c5da265       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      6 minutes ago       Running             kube-apiserver            0                   e46a9bac3bd93       kube-apiserver-ha-445282
	f910525936daa       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      6 minutes ago       Running             kube-controller-manager   0                   c34972633700d       kube-controller-manager-ha-445282
	585303a41caea       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      6 minutes ago       Running             kube-scheduler            0                   b5b8e1d746c8d       kube-scheduler-ha-445282
	
	
	==> coredns [408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570] <==
	[INFO] 10.244.1.2:57634 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003274634s
	[INFO] 10.244.1.2:60887 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000345633s
	[INFO] 10.244.1.2:46939 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000198474s
	[INFO] 10.244.1.2:42067 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000193888s
	[INFO] 10.244.1.2:38612 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103227s
	[INFO] 10.244.0.4:44523 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001703135s
	[INFO] 10.244.0.4:59477 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107361s
	[INFO] 10.244.0.4:56198 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108839s
	[INFO] 10.244.0.4:38398 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00004501s
	[INFO] 10.244.0.4:41070 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061061s
	[INFO] 10.244.2.2:37193 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00186169s
	[INFO] 10.244.2.2:47175 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001259008s
	[INFO] 10.244.2.2:43118 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117844s
	[INFO] 10.244.2.2:43940 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104875s
	[INFO] 10.244.1.2:43839 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163961s
	[INFO] 10.244.1.2:57262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014754s
	[INFO] 10.244.1.2:59861 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089161s
	[INFO] 10.244.0.4:35507 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101753s
	[INFO] 10.244.0.4:50990 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048865s
	[INFO] 10.244.2.2:35692 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101106s
	[INFO] 10.244.2.2:47438 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106571s
	[INFO] 10.244.0.4:37290 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140704s
	[INFO] 10.244.0.4:37755 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145358s
	[INFO] 10.244.2.2:58729 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097845s
	[INFO] 10.244.2.2:47405 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008526s
	
	
	==> coredns [9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1] <==
	[INFO] 10.244.1.2:35140 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013649006s
	[INFO] 10.244.0.4:49386 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00164129s
	[INFO] 10.244.1.2:55522 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193989s
	[INFO] 10.244.1.2:35380 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000257865s
	[INFO] 10.244.1.2:59627 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.080250702s
	[INFO] 10.244.0.4:51929 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136107s
	[INFO] 10.244.0.4:36818 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096811s
	[INFO] 10.244.0.4:42583 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001301585s
	[INFO] 10.244.2.2:59932 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203977s
	[INFO] 10.244.2.2:50906 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000207365s
	[INFO] 10.244.2.2:41438 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000168363s
	[INFO] 10.244.2.2:47479 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170645s
	[INFO] 10.244.1.2:54595 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000208251s
	[INFO] 10.244.0.4:34251 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081496s
	[INFO] 10.244.0.4:35201 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063768s
	[INFO] 10.244.2.2:50926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154679s
	[INFO] 10.244.2.2:39243 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122767s
	[INFO] 10.244.1.2:50770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014514s
	[INFO] 10.244.1.2:37706 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166071s
	[INFO] 10.244.1.2:53197 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000306441s
	[INFO] 10.244.1.2:34142 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128366s
	[INFO] 10.244.0.4:60617 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102661s
	[INFO] 10.244.0.4:54474 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060033s
	[INFO] 10.244.2.2:50977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014662s
	[INFO] 10.244.2.2:58773 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013261s
	
	
	==> describe nodes <==
	Name:               ha-445282
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-445282
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=ha-445282
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T18_21_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:21:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-445282
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:27:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:24:43 +0000   Wed, 17 Jul 2024 18:21:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:24:43 +0000   Wed, 17 Jul 2024 18:21:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:24:43 +0000   Wed, 17 Jul 2024 18:21:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:24:43 +0000   Wed, 17 Jul 2024 18:22:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    ha-445282
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d1ea799c4fd84c5c8c95385b6a2349f7
	  System UUID:                d1ea799c-4fd8-4c5c-8c95-385b6a2349f7
	  Boot ID:                    58e8f531-06d1-4b66-9fa8-93cd9d417ce6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mcsw8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 coredns-7db6d8ff4d-28njs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 coredns-7db6d8ff4d-rzxbr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 etcd-ha-445282                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m28s
	  kube-system                 kindnet-75gcw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m15s
	  kube-system                 kube-apiserver-ha-445282             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-ha-445282    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-proxy-vxmp8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-scheduler-ha-445282             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-vip-ha-445282                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m12s  kube-proxy       
	  Normal  Starting                 6m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m28s  kubelet          Node ha-445282 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m28s  kubelet          Node ha-445282 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m28s  kubelet          Node ha-445282 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m16s  node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	  Normal  NodeReady                5m57s  kubelet          Node ha-445282 status is now: NodeReady
	  Normal  RegisteredNode           5m7s   node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	  Normal  RegisteredNode           3m49s  node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	
	
	Name:               ha-445282-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-445282-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=ha-445282
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_22_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:22:42 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-445282-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:25:37 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 18:24:45 +0000   Wed, 17 Jul 2024 18:26:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 18:24:45 +0000   Wed, 17 Jul 2024 18:26:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 18:24:45 +0000   Wed, 17 Jul 2024 18:26:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 18:24:45 +0000   Wed, 17 Jul 2024 18:26:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-445282-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5dee104babdb45fe968765f68a06ccd6
	  System UUID:                5dee104b-abdb-45fe-9687-65f68a06ccd6
	  Boot ID:                    13d26f90-4583-404e-9e97-b1d855b45a85
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-blwvw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-445282-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m23s
	  kube-system                 kindnet-mdqdz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m25s
	  kube-system                 kube-apiserver-ha-445282-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-ha-445282-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-proxy-xs65r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-scheduler-ha-445282-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-vip-ha-445282-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m20s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m25s)  kubelet          Node ha-445282-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m25s)  kubelet          Node ha-445282-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x7 over 5m25s)  kubelet          Node ha-445282-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-445282-m02 status is now: NodeNotReady
	
	
	Name:               ha-445282-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-445282-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=ha-445282
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_24_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:24:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-445282-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:28:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:25:02 +0000   Wed, 17 Jul 2024 18:24:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:25:02 +0000   Wed, 17 Jul 2024 18:24:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:25:02 +0000   Wed, 17 Jul 2024 18:24:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:25:02 +0000   Wed, 17 Jul 2024 18:24:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    ha-445282-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164180Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164180Ki
	  pods:               110
	System Info:
	  Machine ID:                 a37bfc2af28c4be69cd12d6b627c60fb
	  System UUID:                a37bfc2a-f28c-4be6-9cd1-2d6b627c60fb
	  Boot ID:                    f7c1c0dd-d81b-4bd7-a98f-9c81b86ac22c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xjpp8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-445282-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m4s
	  kube-system                 kindnet-x62t5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-apiserver-ha-445282-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-controller-manager-ha-445282-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-proxy-zb54p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-ha-445282-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-vip-ha-445282-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-445282-m03 event: Registered Node ha-445282-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node ha-445282-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node ha-445282-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node ha-445282-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-445282-m03 event: Registered Node ha-445282-m03 in Controller
	  Normal  RegisteredNode           3m49s                node-controller  Node ha-445282-m03 event: Registered Node ha-445282-m03 in Controller
	
	
	Name:               ha-445282-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-445282-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=ha-445282
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_25_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:25:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-445282-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:27:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:25:35 +0000   Wed, 17 Jul 2024 18:25:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:25:35 +0000   Wed, 17 Jul 2024 18:25:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:25:35 +0000   Wed, 17 Jul 2024 18:25:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:25:35 +0000   Wed, 17 Jul 2024 18:25:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    ha-445282-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55cbb1c4afb849b39c587987c52eb826
	  System UUID:                55cbb1c4-afb8-49b3-9c58-7987c52eb826
	  Boot ID:                    11204469-5192-445f-805c-e983f155f9ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-nx7rb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-jstdw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-445282-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-445282-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-445282-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-445282-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul17 18:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050023] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040164] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.526561] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.440415] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.613050] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.891308] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.059987] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056048] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.193800] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.120214] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.274662] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.047178] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.805512] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.055406] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.996103] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.082270] kauditd_printk_skb: 79 callbacks suppressed
	[ +15.197381] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.192890] kauditd_printk_skb: 34 callbacks suppressed
	[Jul17 18:22] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943] <==
	{"level":"warn","ts":"2024-07-17T18:28:06.993809Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.080618Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.093084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.093916Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.100858Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.11879Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.128678Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.136658Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.141382Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.149802Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.165508Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.175271Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.184855Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.188685Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.192282Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.194609Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.200364Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.206966Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.223691Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.231695Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.24067Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.252699Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.263682Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.277836Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:28:07.295619Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:28:07 up 7 min,  0 users,  load average: 0.15, 0.26, 0.14
	Linux ha-445282 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22] <==
	I0717 18:27:29.996732       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:27:40.001138       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:27:40.001250       1 main.go:303] handling current node
	I0717 18:27:40.001283       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:27:40.001302       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:27:40.001540       1 main.go:299] Handling node with IPs: map[192.168.39.214:{}]
	I0717 18:27:40.001574       1 main.go:326] Node ha-445282-m03 has CIDR [10.244.2.0/24] 
	I0717 18:27:40.001659       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:27:40.001679       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:27:50.001777       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:27:50.001820       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:27:50.001990       1 main.go:299] Handling node with IPs: map[192.168.39.214:{}]
	I0717 18:27:50.002025       1 main.go:326] Node ha-445282-m03 has CIDR [10.244.2.0/24] 
	I0717 18:27:50.002080       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:27:50.002086       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:27:50.002135       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:27:50.002155       1 main.go:303] handling current node
	I0717 18:27:59.994244       1 main.go:299] Handling node with IPs: map[192.168.39.214:{}]
	I0717 18:27:59.994470       1 main.go:326] Node ha-445282-m03 has CIDR [10.244.2.0/24] 
	I0717 18:27:59.994623       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:27:59.994965       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:27:59.995158       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:27:59.995212       1 main.go:303] handling current node
	I0717 18:27:59.995249       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:27:59.995274       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [608260c5da2653858a3ba5ed68d5d0fd133359fe2d82577c89dd208d1fd4061a] <==
	I0717 18:21:38.252099       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 18:21:38.373228       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 18:21:38.381360       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.147]
	I0717 18:21:38.382525       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 18:21:38.387009       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 18:21:38.547630       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 18:21:39.651140       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 18:21:39.662858       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 18:21:39.686588       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 18:21:52.652704       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 18:21:52.698997       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0717 18:24:32.833980       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54804: use of closed network connection
	E0717 18:24:33.027180       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54828: use of closed network connection
	E0717 18:24:33.216008       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54850: use of closed network connection
	E0717 18:24:33.485078       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54866: use of closed network connection
	E0717 18:24:33.684042       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54896: use of closed network connection
	E0717 18:24:33.876765       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54906: use of closed network connection
	E0717 18:24:34.054624       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54920: use of closed network connection
	E0717 18:24:34.234190       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54930: use of closed network connection
	E0717 18:24:34.419918       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54958: use of closed network connection
	E0717 18:24:34.712765       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54988: use of closed network connection
	E0717 18:24:34.905198       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55016: use of closed network connection
	E0717 18:24:35.077222       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55038: use of closed network connection
	E0717 18:24:35.254551       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55046: use of closed network connection
	E0717 18:24:35.615640       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55082: use of closed network connection
	
	
	==> kube-controller-manager [f910525936daaedaf4fb3cce81ed7e6f3f6fb3c9cf2aa2ba7e26987a717c5b8b] <==
	I0717 18:24:01.118382       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-445282-m03\" does not exist"
	I0717 18:24:01.146825       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-445282-m03" podCIDRs=["10.244.2.0/24"]
	I0717 18:24:01.819643       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-445282-m03"
	I0717 18:24:28.418639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.559459ms"
	I0717 18:24:28.510178       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.472106ms"
	I0717 18:24:28.669486       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="159.238373ms"
	I0717 18:24:28.744707       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.165689ms"
	I0717 18:24:28.865093       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="120.334985ms"
	E0717 18:24:28.865122       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0717 18:24:28.865218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.906µs"
	I0717 18:24:28.870615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.303µs"
	I0717 18:24:29.594615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.228µs"
	I0717 18:24:31.072088       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.684µs"
	I0717 18:24:32.273183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.868353ms"
	I0717 18:24:32.274003       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.322µs"
	I0717 18:24:32.402260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.215552ms"
	I0717 18:24:32.402386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.832µs"
	E0717 18:25:05.402853       1 certificate_controller.go:146] Sync csr-gbgzk failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-gbgzk": the object has been modified; please apply your changes to the latest version and try again
	I0717 18:25:05.412298       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-445282-m04\" does not exist"
	I0717 18:25:05.466296       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-445282-m04" podCIDRs=["10.244.3.0/24"]
	I0717 18:25:06.872466       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-445282-m04"
	I0717 18:25:25.867148       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-445282-m04"
	I0717 18:26:18.707329       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-445282-m04"
	I0717 18:26:18.874735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.86891ms"
	I0717 18:26:18.874866       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.448µs"
	
	
	==> kube-proxy [ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0] <==
	I0717 18:21:54.823974       1 server_linux.go:69] "Using iptables proxy"
	I0717 18:21:54.839345       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.147"]
	I0717 18:21:54.877596       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:21:54.877651       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:21:54.877666       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:21:54.880344       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:21:54.880665       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:21:54.880703       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:21:54.881819       1 config.go:192] "Starting service config controller"
	I0717 18:21:54.881952       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:21:54.882002       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:21:54.882020       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:21:54.883767       1 config.go:319] "Starting node config controller"
	I0717 18:21:54.883806       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:21:54.982913       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 18:21:54.982938       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:21:54.984507       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65] <==
	W0717 18:21:36.624504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:21:36.624557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 18:21:37.471065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:21:37.471188       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:21:37.478243       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:21:37.478323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:21:37.660393       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:21:37.660512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 18:21:37.670045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:21:37.670133       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 18:21:37.831345       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:21:37.831408       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 18:21:37.832239       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:21:37.832474       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 18:21:37.840820       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:21:37.840924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 18:21:37.977802       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:21:37.977857       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:21:38.130649       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:21:38.130764       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 18:21:41.385007       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 18:25:05.655243       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-qltvc\": pod kube-proxy-qltvc is already assigned to node \"ha-445282-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-qltvc" node="ha-445282-m04"
	E0717 18:25:05.655449       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod dd75ca54-55d0-45de-ac57-6bbd0a22db78(kube-system/kube-proxy-qltvc) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-qltvc"
	E0717 18:25:05.655476       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-qltvc\": pod kube-proxy-qltvc is already assigned to node \"ha-445282-m04\"" pod="kube-system/kube-proxy-qltvc"
	I0717 18:25:05.655503       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-qltvc" node="ha-445282-m04"
	
	
	==> kubelet <==
	Jul 17 18:23:39 ha-445282 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:23:39 ha-445282 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:24:28 ha-445282 kubelet[1382]: I0717 18:24:28.413786    1382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-28njs" podStartSLOduration=156.413683769 podStartE2EDuration="2m36.413683769s" podCreationTimestamp="2024-07-17 18:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:22:11.851123968 +0000 UTC m=+32.404048029" watchObservedRunningTime="2024-07-17 18:24:28.413683769 +0000 UTC m=+168.966607838"
	Jul 17 18:24:28 ha-445282 kubelet[1382]: I0717 18:24:28.415184    1382 topology_manager.go:215] "Topology Admit Handler" podUID="727368ca-3135-44f6-93b1-5cfb12476236" podNamespace="default" podName="busybox-fc5497c4f-mcsw8"
	Jul 17 18:24:28 ha-445282 kubelet[1382]: I0717 18:24:28.526320    1382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xql79\" (UniqueName: \"kubernetes.io/projected/727368ca-3135-44f6-93b1-5cfb12476236-kube-api-access-xql79\") pod \"busybox-fc5497c4f-mcsw8\" (UID: \"727368ca-3135-44f6-93b1-5cfb12476236\") " pod="default/busybox-fc5497c4f-mcsw8"
	Jul 17 18:24:39 ha-445282 kubelet[1382]: E0717 18:24:39.589701    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:24:39 ha-445282 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:24:39 ha-445282 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:24:39 ha-445282 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:24:39 ha-445282 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:25:39 ha-445282 kubelet[1382]: E0717 18:25:39.588562    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:25:39 ha-445282 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:25:39 ha-445282 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:25:39 ha-445282 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:25:39 ha-445282 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:26:39 ha-445282 kubelet[1382]: E0717 18:26:39.588134    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:26:39 ha-445282 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:26:39 ha-445282 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:26:39 ha-445282 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:26:39 ha-445282 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:27:39 ha-445282 kubelet[1382]: E0717 18:27:39.588824    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:27:39 ha-445282 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:27:39 ha-445282 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:27:39 ha-445282 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:27:39 ha-445282 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-445282 -n ha-445282
helpers_test.go:261: (dbg) Run:  kubectl --context ha-445282 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (61.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr: exit status 3 (3.206961195s)

                                                
                                                
-- stdout --
	ha-445282
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-445282-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:28:11.943448  416476 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:28:11.943682  416476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:28:11.943690  416476 out.go:304] Setting ErrFile to fd 2...
	I0717 18:28:11.943694  416476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:28:11.943947  416476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:28:11.944150  416476 out.go:298] Setting JSON to false
	I0717 18:28:11.944185  416476 mustload.go:65] Loading cluster: ha-445282
	I0717 18:28:11.944299  416476 notify.go:220] Checking for updates...
	I0717 18:28:11.944631  416476 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:28:11.944649  416476 status.go:255] checking status of ha-445282 ...
	I0717 18:28:11.945222  416476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:11.945275  416476 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:11.962782  416476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0717 18:28:11.963210  416476 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:11.963807  416476 main.go:141] libmachine: Using API Version  1
	I0717 18:28:11.963843  416476 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:11.964195  416476 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:11.964418  416476 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:28:11.965980  416476 status.go:330] ha-445282 host status = "Running" (err=<nil>)
	I0717 18:28:11.965999  416476 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:28:11.966291  416476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:11.966343  416476 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:11.982446  416476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39051
	I0717 18:28:11.982980  416476 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:11.983475  416476 main.go:141] libmachine: Using API Version  1
	I0717 18:28:11.983500  416476 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:11.983958  416476 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:11.984215  416476 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:28:11.986998  416476 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:11.987436  416476 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:28:11.987471  416476 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:11.987612  416476 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:28:11.987908  416476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:11.987946  416476 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:12.004635  416476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46379
	I0717 18:28:12.005125  416476 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:12.005636  416476 main.go:141] libmachine: Using API Version  1
	I0717 18:28:12.005663  416476 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:12.006018  416476 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:12.006221  416476 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:28:12.006440  416476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:12.006473  416476 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:28:12.009642  416476 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:12.010087  416476 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:28:12.010108  416476 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:12.010358  416476 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:28:12.010558  416476 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:28:12.010784  416476 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:28:12.010991  416476 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:28:12.092659  416476 ssh_runner.go:195] Run: systemctl --version
	I0717 18:28:12.098834  416476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:12.115424  416476 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:28:12.115457  416476 api_server.go:166] Checking apiserver status ...
	I0717 18:28:12.115514  416476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:28:12.131576  416476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0717 18:28:12.142427  416476 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:28:12.142480  416476 ssh_runner.go:195] Run: ls
	I0717 18:28:12.147054  416476 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:28:12.151192  416476 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:28:12.151219  416476 status.go:422] ha-445282 apiserver status = Running (err=<nil>)
	I0717 18:28:12.151232  416476 status.go:257] ha-445282 status: &{Name:ha-445282 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:28:12.151250  416476 status.go:255] checking status of ha-445282-m02 ...
	I0717 18:28:12.151652  416476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:12.151710  416476 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:12.168329  416476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0717 18:28:12.168881  416476 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:12.169367  416476 main.go:141] libmachine: Using API Version  1
	I0717 18:28:12.169388  416476 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:12.169676  416476 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:12.169847  416476 main.go:141] libmachine: (ha-445282-m02) Calling .GetState
	I0717 18:28:12.171310  416476 status.go:330] ha-445282-m02 host status = "Running" (err=<nil>)
	I0717 18:28:12.171330  416476 host.go:66] Checking if "ha-445282-m02" exists ...
	I0717 18:28:12.171619  416476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:12.171657  416476 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:12.186574  416476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45583
	I0717 18:28:12.187043  416476 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:12.187480  416476 main.go:141] libmachine: Using API Version  1
	I0717 18:28:12.187505  416476 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:12.187866  416476 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:12.188047  416476 main.go:141] libmachine: (ha-445282-m02) Calling .GetIP
	I0717 18:28:12.190636  416476 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:12.191078  416476 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:28:12.191119  416476 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:12.191258  416476 host.go:66] Checking if "ha-445282-m02" exists ...
	I0717 18:28:12.191673  416476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:12.191719  416476 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:12.207616  416476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37783
	I0717 18:28:12.208039  416476 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:12.208604  416476 main.go:141] libmachine: Using API Version  1
	I0717 18:28:12.208630  416476 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:12.208942  416476 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:12.209163  416476 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:28:12.209360  416476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:12.209388  416476 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:28:12.212201  416476 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:12.212710  416476 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:28:12.212751  416476 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:12.212756  416476 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:28:12.212918  416476 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:28:12.213119  416476 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:28:12.213267  416476 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	W0717 18:28:14.744791  416476 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.198:22: connect: no route to host
	W0717 18:28:14.744924  416476 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	E0717 18:28:14.744942  416476 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:14.744950  416476 status.go:257] ha-445282-m02 status: &{Name:ha-445282-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 18:28:14.744969  416476 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:14.744976  416476 status.go:255] checking status of ha-445282-m03 ...
	I0717 18:28:14.745290  416476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:14.745344  416476 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:14.761195  416476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I0717 18:28:14.761629  416476 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:14.762080  416476 main.go:141] libmachine: Using API Version  1
	I0717 18:28:14.762099  416476 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:14.762420  416476 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:14.762627  416476 main.go:141] libmachine: (ha-445282-m03) Calling .GetState
	I0717 18:28:14.764103  416476 status.go:330] ha-445282-m03 host status = "Running" (err=<nil>)
	I0717 18:28:14.764122  416476 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:28:14.764416  416476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:14.764451  416476 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:14.779583  416476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0717 18:28:14.780046  416476 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:14.780610  416476 main.go:141] libmachine: Using API Version  1
	I0717 18:28:14.780640  416476 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:14.780998  416476 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:14.781191  416476 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:28:14.783792  416476 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:14.784184  416476 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:28:14.784205  416476 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:14.784349  416476 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:28:14.784706  416476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:14.784760  416476 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:14.800436  416476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42025
	I0717 18:28:14.800919  416476 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:14.801473  416476 main.go:141] libmachine: Using API Version  1
	I0717 18:28:14.801495  416476 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:14.801816  416476 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:14.802025  416476 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:28:14.802224  416476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:14.802248  416476 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:28:14.805246  416476 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:14.805749  416476 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:28:14.805774  416476 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:14.806043  416476 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:28:14.806219  416476 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:28:14.806364  416476 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:28:14.806536  416476 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:28:14.892001  416476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:14.909224  416476 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:28:14.909256  416476 api_server.go:166] Checking apiserver status ...
	I0717 18:28:14.909296  416476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:28:14.924315  416476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0717 18:28:14.935410  416476 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:28:14.935470  416476 ssh_runner.go:195] Run: ls
	I0717 18:28:14.940522  416476 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:28:14.945377  416476 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:28:14.945407  416476 status.go:422] ha-445282-m03 apiserver status = Running (err=<nil>)
	I0717 18:28:14.945418  416476 status.go:257] ha-445282-m03 status: &{Name:ha-445282-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:28:14.945441  416476 status.go:255] checking status of ha-445282-m04 ...
	I0717 18:28:14.945786  416476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:14.945826  416476 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:14.961400  416476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
	I0717 18:28:14.961853  416476 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:14.962351  416476 main.go:141] libmachine: Using API Version  1
	I0717 18:28:14.962377  416476 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:14.962745  416476 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:14.962952  416476 main.go:141] libmachine: (ha-445282-m04) Calling .GetState
	I0717 18:28:14.964419  416476 status.go:330] ha-445282-m04 host status = "Running" (err=<nil>)
	I0717 18:28:14.964436  416476 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:28:14.964765  416476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:14.964804  416476 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:14.980317  416476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0717 18:28:14.980846  416476 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:14.981406  416476 main.go:141] libmachine: Using API Version  1
	I0717 18:28:14.981428  416476 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:14.981787  416476 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:14.982000  416476 main.go:141] libmachine: (ha-445282-m04) Calling .GetIP
	I0717 18:28:14.984868  416476 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:14.985272  416476 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:28:14.985315  416476 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:14.985426  416476 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:28:14.985728  416476 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:14.985764  416476 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:15.001426  416476 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0717 18:28:15.001834  416476 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:15.002311  416476 main.go:141] libmachine: Using API Version  1
	I0717 18:28:15.002337  416476 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:15.002649  416476 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:15.002830  416476 main.go:141] libmachine: (ha-445282-m04) Calling .DriverName
	I0717 18:28:15.003000  416476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:15.003020  416476 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHHostname
	I0717 18:28:15.005996  416476 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:15.006404  416476 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:28:15.006432  416476 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:15.006586  416476 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHPort
	I0717 18:28:15.006784  416476 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHKeyPath
	I0717 18:28:15.006970  416476 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHUsername
	I0717 18:28:15.007155  416476 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m04/id_rsa Username:docker}
	I0717 18:28:15.091979  416476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:15.106220  416476 status.go:257] ha-445282-m04 status: &{Name:ha-445282-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr: exit status 3 (5.246842066s)

                                                
                                                
-- stdout --
	ha-445282
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-445282-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:28:16.051533  416577 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:28:16.051658  416577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:28:16.051667  416577 out.go:304] Setting ErrFile to fd 2...
	I0717 18:28:16.051671  416577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:28:16.051856  416577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:28:16.052014  416577 out.go:298] Setting JSON to false
	I0717 18:28:16.052045  416577 mustload.go:65] Loading cluster: ha-445282
	I0717 18:28:16.052160  416577 notify.go:220] Checking for updates...
	I0717 18:28:16.052411  416577 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:28:16.052431  416577 status.go:255] checking status of ha-445282 ...
	I0717 18:28:16.052942  416577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:16.053004  416577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:16.068931  416577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34531
	I0717 18:28:16.069383  416577 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:16.070007  416577 main.go:141] libmachine: Using API Version  1
	I0717 18:28:16.070033  416577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:16.070429  416577 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:16.070658  416577 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:28:16.072568  416577 status.go:330] ha-445282 host status = "Running" (err=<nil>)
	I0717 18:28:16.072589  416577 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:28:16.072985  416577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:16.073035  416577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:16.089622  416577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32973
	I0717 18:28:16.090163  416577 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:16.090672  416577 main.go:141] libmachine: Using API Version  1
	I0717 18:28:16.090701  416577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:16.091011  416577 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:16.091214  416577 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:28:16.094034  416577 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:16.094416  416577 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:28:16.094443  416577 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:16.094588  416577 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:28:16.094885  416577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:16.094923  416577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:16.110292  416577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0717 18:28:16.110779  416577 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:16.111269  416577 main.go:141] libmachine: Using API Version  1
	I0717 18:28:16.111305  416577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:16.111627  416577 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:16.111813  416577 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:28:16.111997  416577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:16.112028  416577 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:28:16.114771  416577 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:16.115230  416577 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:28:16.115257  416577 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:16.115436  416577 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:28:16.115603  416577 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:28:16.115757  416577 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:28:16.115889  416577 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:28:16.197756  416577 ssh_runner.go:195] Run: systemctl --version
	I0717 18:28:16.203984  416577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:16.218206  416577 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:28:16.218241  416577 api_server.go:166] Checking apiserver status ...
	I0717 18:28:16.218283  416577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:28:16.233725  416577 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0717 18:28:16.243420  416577 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:28:16.243467  416577 ssh_runner.go:195] Run: ls
	I0717 18:28:16.247581  416577 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:28:16.253597  416577 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:28:16.253623  416577 status.go:422] ha-445282 apiserver status = Running (err=<nil>)
	I0717 18:28:16.253634  416577 status.go:257] ha-445282 status: &{Name:ha-445282 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:28:16.253653  416577 status.go:255] checking status of ha-445282-m02 ...
	I0717 18:28:16.253980  416577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:16.254021  416577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:16.270166  416577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45577
	I0717 18:28:16.270615  416577 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:16.271054  416577 main.go:141] libmachine: Using API Version  1
	I0717 18:28:16.271099  416577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:16.271421  416577 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:16.271616  416577 main.go:141] libmachine: (ha-445282-m02) Calling .GetState
	I0717 18:28:16.273329  416577 status.go:330] ha-445282-m02 host status = "Running" (err=<nil>)
	I0717 18:28:16.273348  416577 host.go:66] Checking if "ha-445282-m02" exists ...
	I0717 18:28:16.273682  416577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:16.273730  416577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:16.288453  416577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35257
	I0717 18:28:16.288941  416577 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:16.289448  416577 main.go:141] libmachine: Using API Version  1
	I0717 18:28:16.289468  416577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:16.289818  416577 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:16.289980  416577 main.go:141] libmachine: (ha-445282-m02) Calling .GetIP
	I0717 18:28:16.292460  416577 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:16.292913  416577 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:28:16.292939  416577 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:16.293087  416577 host.go:66] Checking if "ha-445282-m02" exists ...
	I0717 18:28:16.293392  416577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:16.293445  416577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:16.308692  416577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44943
	I0717 18:28:16.309150  416577 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:16.309699  416577 main.go:141] libmachine: Using API Version  1
	I0717 18:28:16.309729  416577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:16.310022  416577 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:16.310196  416577 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:28:16.310415  416577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:16.310443  416577 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:28:16.313531  416577 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:16.314067  416577 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:28:16.314102  416577 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:16.314220  416577 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:28:16.314423  416577 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:28:16.314591  416577 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:28:16.314791  416577 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	W0717 18:28:17.820795  416577 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:17.820862  416577 retry.go:31] will retry after 147.990274ms: dial tcp 192.168.39.198:22: connect: no route to host
	W0717 18:28:20.888774  416577 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.198:22: connect: no route to host
	W0717 18:28:20.888897  416577 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	E0717 18:28:20.888922  416577 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:20.888933  416577 status.go:257] ha-445282-m02 status: &{Name:ha-445282-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 18:28:20.888982  416577 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:20.888991  416577 status.go:255] checking status of ha-445282-m03 ...
	I0717 18:28:20.889466  416577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:20.889529  416577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:20.904666  416577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41473
	I0717 18:28:20.905135  416577 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:20.905604  416577 main.go:141] libmachine: Using API Version  1
	I0717 18:28:20.905628  416577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:20.906003  416577 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:20.906149  416577 main.go:141] libmachine: (ha-445282-m03) Calling .GetState
	I0717 18:28:20.907720  416577 status.go:330] ha-445282-m03 host status = "Running" (err=<nil>)
	I0717 18:28:20.907752  416577 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:28:20.908125  416577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:20.908162  416577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:20.924090  416577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43301
	I0717 18:28:20.924590  416577 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:20.925066  416577 main.go:141] libmachine: Using API Version  1
	I0717 18:28:20.925086  416577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:20.925439  416577 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:20.925607  416577 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:28:20.928050  416577 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:20.928461  416577 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:28:20.928507  416577 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:20.928638  416577 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:28:20.928924  416577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:20.928957  416577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:20.943650  416577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I0717 18:28:20.944002  416577 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:20.944503  416577 main.go:141] libmachine: Using API Version  1
	I0717 18:28:20.944526  416577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:20.944882  416577 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:20.945142  416577 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:28:20.945336  416577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:20.945369  416577 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:28:20.948370  416577 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:20.948949  416577 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:28:20.948981  416577 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:20.949131  416577 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:28:20.949294  416577 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:28:20.949465  416577 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:28:20.949617  416577 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:28:21.036047  416577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:21.053748  416577 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:28:21.053781  416577 api_server.go:166] Checking apiserver status ...
	I0717 18:28:21.053816  416577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:28:21.067300  416577 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0717 18:28:21.077961  416577 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:28:21.078024  416577 ssh_runner.go:195] Run: ls
	I0717 18:28:21.084189  416577 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:28:21.089596  416577 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:28:21.089626  416577 status.go:422] ha-445282-m03 apiserver status = Running (err=<nil>)
	I0717 18:28:21.089637  416577 status.go:257] ha-445282-m03 status: &{Name:ha-445282-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:28:21.089654  416577 status.go:255] checking status of ha-445282-m04 ...
	I0717 18:28:21.089974  416577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:21.090018  416577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:21.105843  416577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41711
	I0717 18:28:21.106325  416577 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:21.106792  416577 main.go:141] libmachine: Using API Version  1
	I0717 18:28:21.106818  416577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:21.107165  416577 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:21.107381  416577 main.go:141] libmachine: (ha-445282-m04) Calling .GetState
	I0717 18:28:21.109223  416577 status.go:330] ha-445282-m04 host status = "Running" (err=<nil>)
	I0717 18:28:21.109245  416577 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:28:21.109534  416577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:21.109584  416577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:21.125545  416577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I0717 18:28:21.125971  416577 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:21.126457  416577 main.go:141] libmachine: Using API Version  1
	I0717 18:28:21.126480  416577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:21.126803  416577 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:21.127032  416577 main.go:141] libmachine: (ha-445282-m04) Calling .GetIP
	I0717 18:28:21.129781  416577 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:21.130137  416577 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:28:21.130164  416577 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:21.130298  416577 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:28:21.130606  416577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:21.130643  416577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:21.145214  416577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33963
	I0717 18:28:21.145605  416577 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:21.146060  416577 main.go:141] libmachine: Using API Version  1
	I0717 18:28:21.146081  416577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:21.146416  416577 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:21.146652  416577 main.go:141] libmachine: (ha-445282-m04) Calling .DriverName
	I0717 18:28:21.146862  416577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:21.146883  416577 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHHostname
	I0717 18:28:21.149459  416577 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:21.149871  416577 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:28:21.149892  416577 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:21.150018  416577 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHPort
	I0717 18:28:21.150165  416577 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHKeyPath
	I0717 18:28:21.150273  416577 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHUsername
	I0717 18:28:21.150392  416577 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m04/id_rsa Username:docker}
	I0717 18:28:21.236564  416577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:21.251994  416577 status.go:257] ha-445282-m04 status: &{Name:ha-445282-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
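The trace above shows the per-node probe that `minikube status` performs: for each machine it launches the kvm2 driver plugin, resolves the node IP from its libvirt DHCP lease, opens an SSH session to run `df -h /var` and `systemctl is-active kubelet`, and checks the apiserver through the HA VIP at `https://192.168.39.254:8443/healthz`. For ha-445282-m02 the SSH dial to 192.168.39.198:22 fails with "no route to host" even after a retry, so that node is reported as `Host:Error` / `Kubelet:Nonexistent`. The following is a minimal Go sketch of that dial-with-retry plus healthz-probe pattern; the helper names are hypothetical and this is not minikube's actual sshutil/status implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"net/http"
	"time"
)

// dialWithRetry mirrors the retry behaviour in the log: attempt a TCP dial to
// the node's SSH port and retry a few times before giving up.
// Hypothetical helper for illustration only.
func dialWithRetry(addr string, attempts int, wait time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		lastErr = err
		time.Sleep(wait)
	}
	return fmt.Errorf("dial %s failed after %d attempts: %w", addr, attempts, lastErr)
}

// checkHealthz probes the apiserver /healthz endpoint over the cluster VIP,
// as the status command does; the self-signed cert is skipped for brevity.
func checkHealthz(url string) (string, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	return resp.Status, nil
}

func main() {
	// Addresses taken from the log above; m02 is the unreachable node.
	if err := dialWithRetry("192.168.39.198:22", 3, 200*time.Millisecond); err != nil {
		fmt.Println("ha-445282-m02:", err) // surfaces as Host:Error in the status output
	}
	if status, err := checkHealthz("https://192.168.39.254:8443/healthz"); err == nil {
		fmt.Println("apiserver:", status)
	}
}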
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr: exit status 3 (4.918793288s)

                                                
                                                
-- stdout --
	ha-445282
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-445282-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:28:22.773733  416677 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:28:22.774009  416677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:28:22.774022  416677 out.go:304] Setting ErrFile to fd 2...
	I0717 18:28:22.774029  416677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:28:22.774276  416677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:28:22.774496  416677 out.go:298] Setting JSON to false
	I0717 18:28:22.774548  416677 mustload.go:65] Loading cluster: ha-445282
	I0717 18:28:22.774654  416677 notify.go:220] Checking for updates...
	I0717 18:28:22.775150  416677 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:28:22.775173  416677 status.go:255] checking status of ha-445282 ...
	I0717 18:28:22.775745  416677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:22.775796  416677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:22.797465  416677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35059
	I0717 18:28:22.797974  416677 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:22.798521  416677 main.go:141] libmachine: Using API Version  1
	I0717 18:28:22.798550  416677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:22.798979  416677 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:22.799198  416677 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:28:22.800660  416677 status.go:330] ha-445282 host status = "Running" (err=<nil>)
	I0717 18:28:22.800678  416677 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:28:22.801037  416677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:22.801077  416677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:22.816022  416677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43777
	I0717 18:28:22.816406  416677 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:22.816882  416677 main.go:141] libmachine: Using API Version  1
	I0717 18:28:22.816902  416677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:22.817227  416677 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:22.817398  416677 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:28:22.820329  416677 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:22.820755  416677 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:28:22.820784  416677 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:22.820891  416677 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:28:22.821273  416677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:22.821332  416677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:22.837123  416677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41339
	I0717 18:28:22.837566  416677 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:22.838070  416677 main.go:141] libmachine: Using API Version  1
	I0717 18:28:22.838094  416677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:22.838400  416677 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:22.838594  416677 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:28:22.838802  416677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:22.838829  416677 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:28:22.841692  416677 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:22.842163  416677 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:28:22.842192  416677 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:22.842343  416677 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:28:22.842529  416677 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:28:22.842680  416677 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:28:22.842797  416677 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:28:22.925501  416677 ssh_runner.go:195] Run: systemctl --version
	I0717 18:28:22.931791  416677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:22.946797  416677 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:28:22.946828  416677 api_server.go:166] Checking apiserver status ...
	I0717 18:28:22.946865  416677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:28:22.964420  416677 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0717 18:28:22.973292  416677 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:28:22.973348  416677 ssh_runner.go:195] Run: ls
	I0717 18:28:22.977828  416677 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:28:22.985846  416677 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:28:22.985867  416677 status.go:422] ha-445282 apiserver status = Running (err=<nil>)
	I0717 18:28:22.985878  416677 status.go:257] ha-445282 status: &{Name:ha-445282 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:28:22.985910  416677 status.go:255] checking status of ha-445282-m02 ...
	I0717 18:28:22.986226  416677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:22.986277  416677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:23.001714  416677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0717 18:28:23.002168  416677 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:23.002648  416677 main.go:141] libmachine: Using API Version  1
	I0717 18:28:23.002671  416677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:23.003063  416677 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:23.003264  416677 main.go:141] libmachine: (ha-445282-m02) Calling .GetState
	I0717 18:28:23.004740  416677 status.go:330] ha-445282-m02 host status = "Running" (err=<nil>)
	I0717 18:28:23.004755  416677 host.go:66] Checking if "ha-445282-m02" exists ...
	I0717 18:28:23.005115  416677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:23.005158  416677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:23.020350  416677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46777
	I0717 18:28:23.020827  416677 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:23.021347  416677 main.go:141] libmachine: Using API Version  1
	I0717 18:28:23.021371  416677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:23.021669  416677 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:23.021889  416677 main.go:141] libmachine: (ha-445282-m02) Calling .GetIP
	I0717 18:28:23.024816  416677 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:23.025242  416677 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:28:23.025274  416677 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:23.025446  416677 host.go:66] Checking if "ha-445282-m02" exists ...
	I0717 18:28:23.025879  416677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:23.025926  416677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:23.040886  416677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45349
	I0717 18:28:23.041284  416677 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:23.041740  416677 main.go:141] libmachine: Using API Version  1
	I0717 18:28:23.041763  416677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:23.042075  416677 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:23.042245  416677 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:28:23.042449  416677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:23.042469  416677 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:28:23.045441  416677 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:23.046066  416677 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:28:23.046092  416677 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:23.046315  416677 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:28:23.046467  416677 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:28:23.046673  416677 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:28:23.046828  416677 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	W0717 18:28:23.960759  416677 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:23.960810  416677 retry.go:31] will retry after 271.87802ms: dial tcp 192.168.39.198:22: connect: no route to host
	W0717 18:28:27.288772  416677 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.198:22: connect: no route to host
	W0717 18:28:27.288890  416677 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	E0717 18:28:27.288909  416677 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:27.288916  416677 status.go:257] ha-445282-m02 status: &{Name:ha-445282-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 18:28:27.288936  416677 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:27.288943  416677 status.go:255] checking status of ha-445282-m03 ...
	I0717 18:28:27.289250  416677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:27.289293  416677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:27.304859  416677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33929
	I0717 18:28:27.305339  416677 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:27.305877  416677 main.go:141] libmachine: Using API Version  1
	I0717 18:28:27.305909  416677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:27.306220  416677 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:27.306411  416677 main.go:141] libmachine: (ha-445282-m03) Calling .GetState
	I0717 18:28:27.307872  416677 status.go:330] ha-445282-m03 host status = "Running" (err=<nil>)
	I0717 18:28:27.307891  416677 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:28:27.308317  416677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:27.308376  416677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:27.323757  416677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I0717 18:28:27.324126  416677 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:27.324587  416677 main.go:141] libmachine: Using API Version  1
	I0717 18:28:27.324613  416677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:27.324924  416677 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:27.325113  416677 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:28:27.328197  416677 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:27.328675  416677 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:28:27.328700  416677 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:27.328880  416677 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:28:27.329211  416677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:27.329263  416677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:27.345363  416677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
	I0717 18:28:27.345815  416677 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:27.346294  416677 main.go:141] libmachine: Using API Version  1
	I0717 18:28:27.346322  416677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:27.346812  416677 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:27.347146  416677 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:28:27.347384  416677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:27.347409  416677 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:28:27.350676  416677 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:27.351301  416677 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:28:27.351324  416677 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:27.351515  416677 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:28:27.351715  416677 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:28:27.351881  416677 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:28:27.352065  416677 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:28:27.440007  416677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:27.454400  416677 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:28:27.454432  416677 api_server.go:166] Checking apiserver status ...
	I0717 18:28:27.454466  416677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:28:27.467858  416677 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0717 18:28:27.477246  416677 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:28:27.477310  416677 ssh_runner.go:195] Run: ls
	I0717 18:28:27.481767  416677 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:28:27.486064  416677 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:28:27.486091  416677 status.go:422] ha-445282-m03 apiserver status = Running (err=<nil>)
	I0717 18:28:27.486100  416677 status.go:257] ha-445282-m03 status: &{Name:ha-445282-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:28:27.486119  416677 status.go:255] checking status of ha-445282-m04 ...
	I0717 18:28:27.486440  416677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:27.486487  416677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:27.501825  416677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33681
	I0717 18:28:27.502356  416677 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:27.502931  416677 main.go:141] libmachine: Using API Version  1
	I0717 18:28:27.502952  416677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:27.503283  416677 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:27.503491  416677 main.go:141] libmachine: (ha-445282-m04) Calling .GetState
	I0717 18:28:27.505256  416677 status.go:330] ha-445282-m04 host status = "Running" (err=<nil>)
	I0717 18:28:27.505280  416677 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:28:27.505594  416677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:27.505633  416677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:27.521179  416677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39033
	I0717 18:28:27.521580  416677 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:27.522088  416677 main.go:141] libmachine: Using API Version  1
	I0717 18:28:27.522113  416677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:27.522445  416677 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:27.522669  416677 main.go:141] libmachine: (ha-445282-m04) Calling .GetIP
	I0717 18:28:27.525442  416677 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:27.525890  416677 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:28:27.525920  416677 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:27.526066  416677 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:28:27.526356  416677 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:27.526400  416677 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:27.542767  416677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39431
	I0717 18:28:27.543192  416677 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:27.543706  416677 main.go:141] libmachine: Using API Version  1
	I0717 18:28:27.543723  416677 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:27.544021  416677 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:27.544228  416677 main.go:141] libmachine: (ha-445282-m04) Calling .DriverName
	I0717 18:28:27.544403  416677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:27.544421  416677 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHHostname
	I0717 18:28:27.547108  416677 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:27.547538  416677 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:28:27.547564  416677 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:27.547739  416677 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHPort
	I0717 18:28:27.547895  416677 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHKeyPath
	I0717 18:28:27.548063  416677 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHUsername
	I0717 18:28:27.548189  416677 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m04/id_rsa Username:docker}
	I0717 18:28:27.631602  416677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:27.646132  416677 status.go:257] ha-445282-m04 status: &{Name:ha-445282-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
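Each repeated `status -v=7 --alsologtostderr` run above walks the same probe sequence and keeps returning exit status 3 while 192.168.39.198:22 stays unreachable. A rough sketch of how a test could poll for the cluster to recover is shown below; the helper is hypothetical and only assumes shelling out to the minikube binary, it is not the ha_test.go implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAllRunning repeatedly runs `minikube status` for the given profile
// until no node reports "host: Error", or the deadline expires.
// Hypothetical polling helper for illustration only.
func waitForAllRunning(profile string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"status", "-v=7", "--alsologtostderr").CombinedOutput()
		if !strings.Contains(string(out), "host: Error") {
			return nil // every node is reachable again
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("profile %s still has nodes in Error state after %s", profile, timeout)
}

func main() {
	if err := waitForAllRunning("ha-445282", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}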
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr: exit status 3 (3.748797857s)

                                                
                                                
-- stdout --
	ha-445282
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-445282-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:28:30.385583  416794 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:28:30.385697  416794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:28:30.385705  416794 out.go:304] Setting ErrFile to fd 2...
	I0717 18:28:30.385709  416794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:28:30.385902  416794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:28:30.386057  416794 out.go:298] Setting JSON to false
	I0717 18:28:30.386089  416794 mustload.go:65] Loading cluster: ha-445282
	I0717 18:28:30.386139  416794 notify.go:220] Checking for updates...
	I0717 18:28:30.386594  416794 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:28:30.386616  416794 status.go:255] checking status of ha-445282 ...
	I0717 18:28:30.387136  416794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:30.387176  416794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:30.403062  416794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
	I0717 18:28:30.403488  416794 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:30.404179  416794 main.go:141] libmachine: Using API Version  1
	I0717 18:28:30.404222  416794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:30.404608  416794 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:30.404817  416794 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:28:30.406281  416794 status.go:330] ha-445282 host status = "Running" (err=<nil>)
	I0717 18:28:30.406299  416794 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:28:30.406592  416794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:30.406630  416794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:30.422767  416794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35351
	I0717 18:28:30.423255  416794 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:30.423765  416794 main.go:141] libmachine: Using API Version  1
	I0717 18:28:30.423785  416794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:30.424061  416794 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:30.424250  416794 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:28:30.426804  416794 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:30.427132  416794 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:28:30.427152  416794 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:30.427275  416794 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:28:30.427588  416794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:30.427643  416794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:30.442112  416794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41287
	I0717 18:28:30.442517  416794 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:30.442963  416794 main.go:141] libmachine: Using API Version  1
	I0717 18:28:30.442985  416794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:30.443331  416794 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:30.443543  416794 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:28:30.443804  416794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:30.443841  416794 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:28:30.446584  416794 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:30.446970  416794 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:28:30.446997  416794 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:30.447190  416794 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:28:30.447367  416794 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:28:30.447511  416794 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:28:30.447638  416794 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:28:30.530329  416794 ssh_runner.go:195] Run: systemctl --version
	I0717 18:28:30.539424  416794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:30.559353  416794 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:28:30.559386  416794 api_server.go:166] Checking apiserver status ...
	I0717 18:28:30.559419  416794 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:28:30.576763  416794 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0717 18:28:30.588248  416794 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:28:30.588310  416794 ssh_runner.go:195] Run: ls
	I0717 18:28:30.593976  416794 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:28:30.600418  416794 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:28:30.600449  416794 status.go:422] ha-445282 apiserver status = Running (err=<nil>)
	I0717 18:28:30.600459  416794 status.go:257] ha-445282 status: &{Name:ha-445282 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:28:30.600507  416794 status.go:255] checking status of ha-445282-m02 ...
	I0717 18:28:30.600900  416794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:30.600944  416794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:30.616769  416794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38551
	I0717 18:28:30.617258  416794 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:30.617772  416794 main.go:141] libmachine: Using API Version  1
	I0717 18:28:30.617798  416794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:30.618169  416794 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:30.618377  416794 main.go:141] libmachine: (ha-445282-m02) Calling .GetState
	I0717 18:28:30.619817  416794 status.go:330] ha-445282-m02 host status = "Running" (err=<nil>)
	I0717 18:28:30.619841  416794 host.go:66] Checking if "ha-445282-m02" exists ...
	I0717 18:28:30.620183  416794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:30.620225  416794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:30.635747  416794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0717 18:28:30.636146  416794 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:30.636679  416794 main.go:141] libmachine: Using API Version  1
	I0717 18:28:30.636708  416794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:30.637026  416794 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:30.637238  416794 main.go:141] libmachine: (ha-445282-m02) Calling .GetIP
	I0717 18:28:30.640077  416794 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:30.640466  416794 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:28:30.640504  416794 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:30.640677  416794 host.go:66] Checking if "ha-445282-m02" exists ...
	I0717 18:28:30.640979  416794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:30.641022  416794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:30.657542  416794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43411
	I0717 18:28:30.657958  416794 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:30.658461  416794 main.go:141] libmachine: Using API Version  1
	I0717 18:28:30.658483  416794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:30.658763  416794 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:30.658941  416794 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:28:30.659138  416794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:30.659160  416794 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:28:30.662041  416794 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:30.662424  416794 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:28:30.662453  416794 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:30.662610  416794 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:28:30.662782  416794 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:28:30.662950  416794 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:28:30.663082  416794 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	W0717 18:28:33.724787  416794 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.198:22: connect: no route to host
	W0717 18:28:33.724914  416794 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	E0717 18:28:33.724961  416794 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:33.724976  416794 status.go:257] ha-445282-m02 status: &{Name:ha-445282-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 18:28:33.725001  416794 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:33.725015  416794 status.go:255] checking status of ha-445282-m03 ...
	I0717 18:28:33.725483  416794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:33.725593  416794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:33.741079  416794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46853
	I0717 18:28:33.741484  416794 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:33.741965  416794 main.go:141] libmachine: Using API Version  1
	I0717 18:28:33.741990  416794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:33.742298  416794 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:33.742489  416794 main.go:141] libmachine: (ha-445282-m03) Calling .GetState
	I0717 18:28:33.743948  416794 status.go:330] ha-445282-m03 host status = "Running" (err=<nil>)
	I0717 18:28:33.743980  416794 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:28:33.744389  416794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:33.744432  416794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:33.758994  416794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I0717 18:28:33.759381  416794 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:33.759817  416794 main.go:141] libmachine: Using API Version  1
	I0717 18:28:33.759840  416794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:33.760251  416794 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:33.760449  416794 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:28:33.763035  416794 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:33.763473  416794 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:28:33.763514  416794 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:33.763617  416794 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:28:33.763955  416794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:33.764007  416794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:33.778286  416794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36935
	I0717 18:28:33.778660  416794 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:33.779143  416794 main.go:141] libmachine: Using API Version  1
	I0717 18:28:33.779163  416794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:33.779450  416794 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:33.779643  416794 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:28:33.779819  416794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:33.779839  416794 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:28:33.782440  416794 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:33.782835  416794 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:28:33.782854  416794 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:33.783046  416794 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:28:33.783209  416794 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:28:33.783370  416794 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:28:33.783472  416794 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:28:33.871320  416794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:33.887508  416794 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:28:33.887548  416794 api_server.go:166] Checking apiserver status ...
	I0717 18:28:33.887594  416794 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:28:33.904471  416794 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0717 18:28:33.914567  416794 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:28:33.914626  416794 ssh_runner.go:195] Run: ls
	I0717 18:28:33.919212  416794 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:28:33.926199  416794 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:28:33.926218  416794 status.go:422] ha-445282-m03 apiserver status = Running (err=<nil>)
	I0717 18:28:33.926227  416794 status.go:257] ha-445282-m03 status: &{Name:ha-445282-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:28:33.926246  416794 status.go:255] checking status of ha-445282-m04 ...
	I0717 18:28:33.926568  416794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:33.926625  416794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:33.943689  416794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42709
	I0717 18:28:33.944089  416794 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:33.944601  416794 main.go:141] libmachine: Using API Version  1
	I0717 18:28:33.944629  416794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:33.944958  416794 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:33.945199  416794 main.go:141] libmachine: (ha-445282-m04) Calling .GetState
	I0717 18:28:33.946752  416794 status.go:330] ha-445282-m04 host status = "Running" (err=<nil>)
	I0717 18:28:33.946767  416794 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:28:33.947119  416794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:33.947158  416794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:33.961619  416794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39119
	I0717 18:28:33.962066  416794 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:33.962540  416794 main.go:141] libmachine: Using API Version  1
	I0717 18:28:33.962563  416794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:33.962905  416794 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:33.963097  416794 main.go:141] libmachine: (ha-445282-m04) Calling .GetIP
	I0717 18:28:33.965753  416794 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:33.966136  416794 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:28:33.966173  416794 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:33.966282  416794 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:28:33.966712  416794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:33.966761  416794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:33.981235  416794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45743
	I0717 18:28:33.981601  416794 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:33.982105  416794 main.go:141] libmachine: Using API Version  1
	I0717 18:28:33.982135  416794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:33.982438  416794 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:33.982635  416794 main.go:141] libmachine: (ha-445282-m04) Calling .DriverName
	I0717 18:28:33.982812  416794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:33.982835  416794 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHHostname
	I0717 18:28:33.985591  416794 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:33.986035  416794 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:28:33.986068  416794 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:33.986209  416794 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHPort
	I0717 18:28:33.986359  416794 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHKeyPath
	I0717 18:28:33.986507  416794 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHUsername
	I0717 18:28:33.986607  416794 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m04/id_rsa Username:docker}
	I0717 18:28:34.071988  416794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:34.087009  416794 status.go:257] ha-445282-m04 status: &{Name:ha-445282-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
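For reference, the repeated "dial tcp 192.168.39.198:22: connect: no route to host" / "will retry" lines above come from the status command probing each node's SSH port before it can run anything on the guest. Below is a minimal Go sketch of such a reachability probe; the address is the m02 address from the log, while the attempt count and back-off are illustrative assumptions, not minikube's actual sshutil implementation.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH dials the node's SSH port a few times and reports whether it is
// reachable. A "no route to host" error here is what ultimately surfaces as
// Host:Error / Kubelet:Nonexistent in the status output above.
func probeSSH(addr string, attempts int, wait time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry): %v\n", err)
		time.Sleep(wait)
	}
	return fmt.Errorf("unreachable after %d attempts: %w", attempts, lastErr)
}

func main() {
	// 192.168.39.198 is taken from the log; the retry budget is hypothetical.
	if err := probeSSH("192.168.39.198:22", 3, 300*time.Millisecond); err != nil {
		fmt.Println("node would be reported as Host:Error:", err)
	}
}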
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr: exit status 3 (4.678288064s)

                                                
                                                
-- stdout --
	ha-445282
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-445282-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:28:35.918490  416895 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:28:35.918605  416895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:28:35.918616  416895 out.go:304] Setting ErrFile to fd 2...
	I0717 18:28:35.918620  416895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:28:35.918855  416895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:28:35.919085  416895 out.go:298] Setting JSON to false
	I0717 18:28:35.919125  416895 mustload.go:65] Loading cluster: ha-445282
	I0717 18:28:35.919246  416895 notify.go:220] Checking for updates...
	I0717 18:28:35.919674  416895 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:28:35.919699  416895 status.go:255] checking status of ha-445282 ...
	I0717 18:28:35.920334  416895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:35.920383  416895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:35.935843  416895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36275
	I0717 18:28:35.936237  416895 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:35.936846  416895 main.go:141] libmachine: Using API Version  1
	I0717 18:28:35.936871  416895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:35.937316  416895 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:35.937597  416895 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:28:35.939078  416895 status.go:330] ha-445282 host status = "Running" (err=<nil>)
	I0717 18:28:35.939098  416895 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:28:35.939499  416895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:35.939547  416895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:35.954547  416895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36271
	I0717 18:28:35.954881  416895 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:35.955330  416895 main.go:141] libmachine: Using API Version  1
	I0717 18:28:35.955352  416895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:35.955700  416895 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:35.955895  416895 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:28:35.958683  416895 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:35.959198  416895 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:28:35.959241  416895 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:35.959379  416895 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:28:35.959657  416895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:35.959693  416895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:35.974172  416895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I0717 18:28:35.974583  416895 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:35.975031  416895 main.go:141] libmachine: Using API Version  1
	I0717 18:28:35.975061  416895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:35.975346  416895 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:35.975502  416895 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:28:35.975677  416895 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:35.975706  416895 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:28:35.978067  416895 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:35.978476  416895 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:28:35.978504  416895 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:35.978622  416895 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:28:35.978796  416895 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:28:35.978997  416895 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:28:35.979156  416895 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:28:36.059897  416895 ssh_runner.go:195] Run: systemctl --version
	I0717 18:28:36.065944  416895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:36.080089  416895 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:28:36.080113  416895 api_server.go:166] Checking apiserver status ...
	I0717 18:28:36.080149  416895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:28:36.093930  416895 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0717 18:28:36.103378  416895 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:28:36.103418  416895 ssh_runner.go:195] Run: ls
	I0717 18:28:36.107607  416895 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:28:36.112057  416895 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:28:36.112085  416895 status.go:422] ha-445282 apiserver status = Running (err=<nil>)
	I0717 18:28:36.112119  416895 status.go:257] ha-445282 status: &{Name:ha-445282 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:28:36.112147  416895 status.go:255] checking status of ha-445282-m02 ...
	I0717 18:28:36.112445  416895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:36.112527  416895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:36.127717  416895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38321
	I0717 18:28:36.128224  416895 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:36.128763  416895 main.go:141] libmachine: Using API Version  1
	I0717 18:28:36.128784  416895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:36.129160  416895 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:36.129481  416895 main.go:141] libmachine: (ha-445282-m02) Calling .GetState
	I0717 18:28:36.131008  416895 status.go:330] ha-445282-m02 host status = "Running" (err=<nil>)
	I0717 18:28:36.131024  416895 host.go:66] Checking if "ha-445282-m02" exists ...
	I0717 18:28:36.131311  416895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:36.131343  416895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:36.145930  416895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41409
	I0717 18:28:36.146387  416895 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:36.146844  416895 main.go:141] libmachine: Using API Version  1
	I0717 18:28:36.146869  416895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:36.147171  416895 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:36.147323  416895 main.go:141] libmachine: (ha-445282-m02) Calling .GetIP
	I0717 18:28:36.149881  416895 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:36.150267  416895 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:28:36.150293  416895 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:36.150427  416895 host.go:66] Checking if "ha-445282-m02" exists ...
	I0717 18:28:36.150820  416895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:36.150864  416895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:36.165187  416895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37379
	I0717 18:28:36.165595  416895 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:36.166088  416895 main.go:141] libmachine: Using API Version  1
	I0717 18:28:36.166110  416895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:36.166445  416895 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:36.166620  416895 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:28:36.166818  416895 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:36.166842  416895 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:28:36.169608  416895 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:36.170060  416895 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:28:36.170087  416895 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:36.170247  416895 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:28:36.170402  416895 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:28:36.170576  416895 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:28:36.170724  416895 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	W0717 18:28:36.792795  416895 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:36.792848  416895 retry.go:31] will retry after 312.02118ms: dial tcp 192.168.39.198:22: connect: no route to host
	W0717 18:28:40.184791  416895 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.198:22: connect: no route to host
	W0717 18:28:40.184901  416895 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	E0717 18:28:40.184951  416895 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:40.184971  416895 status.go:257] ha-445282-m02 status: &{Name:ha-445282-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 18:28:40.185001  416895 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:40.185014  416895 status.go:255] checking status of ha-445282-m03 ...
	I0717 18:28:40.185337  416895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:40.185421  416895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:40.202271  416895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0717 18:28:40.202814  416895 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:40.203359  416895 main.go:141] libmachine: Using API Version  1
	I0717 18:28:40.203390  416895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:40.203825  416895 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:40.204055  416895 main.go:141] libmachine: (ha-445282-m03) Calling .GetState
	I0717 18:28:40.205821  416895 status.go:330] ha-445282-m03 host status = "Running" (err=<nil>)
	I0717 18:28:40.205839  416895 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:28:40.206268  416895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:40.206325  416895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:40.220864  416895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39881
	I0717 18:28:40.221327  416895 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:40.221864  416895 main.go:141] libmachine: Using API Version  1
	I0717 18:28:40.221886  416895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:40.222181  416895 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:40.222346  416895 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:28:40.224853  416895 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:40.225229  416895 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:28:40.225255  416895 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:40.225402  416895 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:28:40.225836  416895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:40.225891  416895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:40.240749  416895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36835
	I0717 18:28:40.241082  416895 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:40.241510  416895 main.go:141] libmachine: Using API Version  1
	I0717 18:28:40.241529  416895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:40.241797  416895 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:40.241988  416895 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:28:40.242184  416895 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:40.242206  416895 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:28:40.244750  416895 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:40.245127  416895 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:28:40.245153  416895 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:40.245274  416895 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:28:40.245447  416895 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:28:40.245608  416895 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:28:40.245749  416895 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:28:40.332080  416895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:40.346975  416895 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:28:40.347010  416895 api_server.go:166] Checking apiserver status ...
	I0717 18:28:40.347053  416895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:28:40.363321  416895 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0717 18:28:40.375124  416895 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:28:40.375181  416895 ssh_runner.go:195] Run: ls
	I0717 18:28:40.379585  416895 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:28:40.387819  416895 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:28:40.387846  416895 status.go:422] ha-445282-m03 apiserver status = Running (err=<nil>)
	I0717 18:28:40.387859  416895 status.go:257] ha-445282-m03 status: &{Name:ha-445282-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:28:40.387881  416895 status.go:255] checking status of ha-445282-m04 ...
	I0717 18:28:40.388172  416895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:40.388216  416895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:40.403552  416895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38411
	I0717 18:28:40.403957  416895 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:40.404517  416895 main.go:141] libmachine: Using API Version  1
	I0717 18:28:40.404540  416895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:40.404897  416895 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:40.405092  416895 main.go:141] libmachine: (ha-445282-m04) Calling .GetState
	I0717 18:28:40.406757  416895 status.go:330] ha-445282-m04 host status = "Running" (err=<nil>)
	I0717 18:28:40.406776  416895 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:28:40.407161  416895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:40.407207  416895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:40.421910  416895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37749
	I0717 18:28:40.422282  416895 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:40.422734  416895 main.go:141] libmachine: Using API Version  1
	I0717 18:28:40.422752  416895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:40.423097  416895 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:40.423311  416895 main.go:141] libmachine: (ha-445282-m04) Calling .GetIP
	I0717 18:28:40.426130  416895 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:40.426516  416895 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:28:40.426548  416895 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:40.426679  416895 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:28:40.426977  416895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:40.427020  416895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:40.442090  416895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35059
	I0717 18:28:40.442555  416895 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:40.443114  416895 main.go:141] libmachine: Using API Version  1
	I0717 18:28:40.443137  416895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:40.443486  416895 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:40.443693  416895 main.go:141] libmachine: (ha-445282-m04) Calling .DriverName
	I0717 18:28:40.443881  416895 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:40.443901  416895 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHHostname
	I0717 18:28:40.446851  416895 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:40.447275  416895 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:28:40.447300  416895 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:40.447466  416895 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHPort
	I0717 18:28:40.447651  416895 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHKeyPath
	I0717 18:28:40.447803  416895 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHUsername
	I0717 18:28:40.447949  416895 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m04/id_rsa Username:docker}
	I0717 18:28:40.535631  416895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:40.550842  416895 status.go:257] ha-445282-m04 status: &{Name:ha-445282-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
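In each pass above, once the freezer-cgroup lookup fails the apiserver check falls through to a plain GET against the load-balanced healthz endpoint (api_server.go:253), and "returned 200: ok" is what yields "apiserver status = Running". Below is a minimal Go sketch of that probe, assuming the endpoint from the log and a self-signed cluster certificate (hence InsecureSkipVerify); it is an illustration, not minikube's own client code.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log; TLS verification is skipped here because
	// this sketch assumes a cluster-local, self-signed certificate.
	const healthz = "https://192.168.39.254:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(healthz)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", healthz, resp.StatusCode, body)
}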
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr: exit status 3 (3.752224939s)

                                                
                                                
-- stdout --
	ha-445282
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-445282-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:28:45.713725  417011 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:28:45.713983  417011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:28:45.713991  417011 out.go:304] Setting ErrFile to fd 2...
	I0717 18:28:45.713996  417011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:28:45.714170  417011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:28:45.714331  417011 out.go:298] Setting JSON to false
	I0717 18:28:45.714358  417011 mustload.go:65] Loading cluster: ha-445282
	I0717 18:28:45.714415  417011 notify.go:220] Checking for updates...
	I0717 18:28:45.714872  417011 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:28:45.714895  417011 status.go:255] checking status of ha-445282 ...
	I0717 18:28:45.715485  417011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:45.715545  417011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:45.731910  417011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38323
	I0717 18:28:45.732329  417011 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:45.732943  417011 main.go:141] libmachine: Using API Version  1
	I0717 18:28:45.732969  417011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:45.733344  417011 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:45.733550  417011 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:28:45.735033  417011 status.go:330] ha-445282 host status = "Running" (err=<nil>)
	I0717 18:28:45.735053  417011 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:28:45.735481  417011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:45.735529  417011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:45.751168  417011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46859
	I0717 18:28:45.751584  417011 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:45.752014  417011 main.go:141] libmachine: Using API Version  1
	I0717 18:28:45.752035  417011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:45.752319  417011 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:45.752499  417011 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:28:45.755072  417011 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:45.755442  417011 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:28:45.755473  417011 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:45.755600  417011 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:28:45.755902  417011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:45.755941  417011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:45.770804  417011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43515
	I0717 18:28:45.771214  417011 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:45.771667  417011 main.go:141] libmachine: Using API Version  1
	I0717 18:28:45.771689  417011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:45.772110  417011 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:45.772343  417011 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:28:45.772572  417011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:45.772595  417011 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:28:45.775393  417011 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:45.775849  417011 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:28:45.775872  417011 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:28:45.776039  417011 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:28:45.776215  417011 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:28:45.776363  417011 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:28:45.776534  417011 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:28:45.856195  417011 ssh_runner.go:195] Run: systemctl --version
	I0717 18:28:45.862778  417011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:45.877687  417011 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:28:45.877723  417011 api_server.go:166] Checking apiserver status ...
	I0717 18:28:45.877779  417011 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:28:45.892652  417011 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0717 18:28:45.901665  417011 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:28:45.901700  417011 ssh_runner.go:195] Run: ls
	I0717 18:28:45.905825  417011 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:28:45.909971  417011 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:28:45.909993  417011 status.go:422] ha-445282 apiserver status = Running (err=<nil>)
	I0717 18:28:45.910006  417011 status.go:257] ha-445282 status: &{Name:ha-445282 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:28:45.910031  417011 status.go:255] checking status of ha-445282-m02 ...
	I0717 18:28:45.910350  417011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:45.910393  417011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:45.926704  417011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42909
	I0717 18:28:45.927082  417011 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:45.927559  417011 main.go:141] libmachine: Using API Version  1
	I0717 18:28:45.927583  417011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:45.927914  417011 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:45.928123  417011 main.go:141] libmachine: (ha-445282-m02) Calling .GetState
	I0717 18:28:45.929809  417011 status.go:330] ha-445282-m02 host status = "Running" (err=<nil>)
	I0717 18:28:45.929828  417011 host.go:66] Checking if "ha-445282-m02" exists ...
	I0717 18:28:45.930233  417011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:45.930284  417011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:45.946506  417011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33305
	I0717 18:28:45.946920  417011 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:45.947412  417011 main.go:141] libmachine: Using API Version  1
	I0717 18:28:45.947431  417011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:45.947745  417011 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:45.947929  417011 main.go:141] libmachine: (ha-445282-m02) Calling .GetIP
	I0717 18:28:45.950819  417011 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:45.951259  417011 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:28:45.951287  417011 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:45.951355  417011 host.go:66] Checking if "ha-445282-m02" exists ...
	I0717 18:28:45.951660  417011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:45.951695  417011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:45.966785  417011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42953
	I0717 18:28:45.967256  417011 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:45.967740  417011 main.go:141] libmachine: Using API Version  1
	I0717 18:28:45.967769  417011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:45.968036  417011 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:45.968201  417011 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:28:45.968402  417011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:45.968421  417011 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:28:45.970982  417011 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:45.971450  417011 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:28:45.971496  417011 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:28:45.971635  417011 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:28:45.971804  417011 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:28:45.971965  417011 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:28:45.972136  417011 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	W0717 18:28:49.048730  417011 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.198:22: connect: no route to host
	W0717 18:28:49.048841  417011 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	E0717 18:28:49.048858  417011 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:49.048868  417011 status.go:257] ha-445282-m02 status: &{Name:ha-445282-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 18:28:49.048892  417011 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.198:22: connect: no route to host
	I0717 18:28:49.048900  417011 status.go:255] checking status of ha-445282-m03 ...
	I0717 18:28:49.049224  417011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:49.049264  417011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:49.065053  417011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
	I0717 18:28:49.065503  417011 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:49.065955  417011 main.go:141] libmachine: Using API Version  1
	I0717 18:28:49.065979  417011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:49.066321  417011 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:49.066504  417011 main.go:141] libmachine: (ha-445282-m03) Calling .GetState
	I0717 18:28:49.068310  417011 status.go:330] ha-445282-m03 host status = "Running" (err=<nil>)
	I0717 18:28:49.068331  417011 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:28:49.068681  417011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:49.068721  417011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:49.084022  417011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44871
	I0717 18:28:49.084378  417011 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:49.084895  417011 main.go:141] libmachine: Using API Version  1
	I0717 18:28:49.084916  417011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:49.085272  417011 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:49.085473  417011 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:28:49.088271  417011 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:49.088740  417011 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:28:49.088764  417011 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:49.088908  417011 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:28:49.089233  417011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:49.089279  417011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:49.104147  417011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I0717 18:28:49.104508  417011 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:49.104922  417011 main.go:141] libmachine: Using API Version  1
	I0717 18:28:49.104942  417011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:49.105238  417011 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:49.105438  417011 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:28:49.105640  417011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:49.105665  417011 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:28:49.108194  417011 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:49.108611  417011 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:28:49.108633  417011 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:28:49.108790  417011 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:28:49.108972  417011 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:28:49.109092  417011 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:28:49.109244  417011 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:28:49.195955  417011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:49.216664  417011 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:28:49.216696  417011 api_server.go:166] Checking apiserver status ...
	I0717 18:28:49.216729  417011 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:28:49.236017  417011 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0717 18:28:49.246087  417011 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:28:49.246152  417011 ssh_runner.go:195] Run: ls
	I0717 18:28:49.250675  417011 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:28:49.255275  417011 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:28:49.255301  417011 status.go:422] ha-445282-m03 apiserver status = Running (err=<nil>)
	I0717 18:28:49.255313  417011 status.go:257] ha-445282-m03 status: &{Name:ha-445282-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:28:49.255334  417011 status.go:255] checking status of ha-445282-m04 ...
	I0717 18:28:49.255621  417011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:49.255666  417011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:49.271337  417011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34367
	I0717 18:28:49.271795  417011 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:49.272259  417011 main.go:141] libmachine: Using API Version  1
	I0717 18:28:49.272282  417011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:49.272655  417011 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:49.272819  417011 main.go:141] libmachine: (ha-445282-m04) Calling .GetState
	I0717 18:28:49.274525  417011 status.go:330] ha-445282-m04 host status = "Running" (err=<nil>)
	I0717 18:28:49.274541  417011 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:28:49.274824  417011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:49.274865  417011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:49.290945  417011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I0717 18:28:49.291344  417011 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:49.291809  417011 main.go:141] libmachine: Using API Version  1
	I0717 18:28:49.291835  417011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:49.292190  417011 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:49.292382  417011 main.go:141] libmachine: (ha-445282-m04) Calling .GetIP
	I0717 18:28:49.295265  417011 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:49.295675  417011 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:28:49.295699  417011 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:49.295835  417011 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:28:49.296144  417011 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:28:49.296182  417011 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:28:49.312309  417011 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40083
	I0717 18:28:49.312773  417011 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:28:49.313266  417011 main.go:141] libmachine: Using API Version  1
	I0717 18:28:49.313289  417011 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:28:49.313650  417011 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:28:49.313863  417011 main.go:141] libmachine: (ha-445282-m04) Calling .DriverName
	I0717 18:28:49.314062  417011 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:28:49.314096  417011 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHHostname
	I0717 18:28:49.316727  417011 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:49.317078  417011 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:28:49.317115  417011 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:28:49.317201  417011 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHPort
	I0717 18:28:49.317381  417011 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHKeyPath
	I0717 18:28:49.317547  417011 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHUsername
	I0717 18:28:49.317679  417011 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m04/id_rsa Username:docker}
	I0717 18:28:49.404605  417011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:28:49.420752  417011 status.go:257] ha-445282-m04 status: &{Name:ha-445282-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
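
For context: the status command is re-run several times below (at 18:29:00 and 18:29:10, after the 18:28:49 attempt above) while the test waits for ha-445282-m02 to come back after `node start m02`. As an illustration only — the real logic lives in ha_test.go and helpers_test.go, which are not reproduced here — a polling loop of that shape could look like the following Go sketch; the binary path and profile name are taken from the logs above, and the timeout and interval are arbitrary assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Illustrative sketch, not the actual test code: re-run `minikube status`
	// until it exits 0 (all nodes Running) or a deadline passes.
	const profile = "ha-445282"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"status", "-v=7", "--alsologtostderr")
		if err := cmd.Run(); err == nil {
			fmt.Println("all nodes report Running")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out: status still returns a non-zero exit code")
}
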
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr: exit status 7 (634.714834ms)

                                                
                                                
-- stdout --
	ha-445282
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-445282-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:29:00.211704  417164 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:29:00.211830  417164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:29:00.211842  417164 out.go:304] Setting ErrFile to fd 2...
	I0717 18:29:00.211848  417164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:29:00.212129  417164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:29:00.212371  417164 out.go:298] Setting JSON to false
	I0717 18:29:00.212411  417164 mustload.go:65] Loading cluster: ha-445282
	I0717 18:29:00.212454  417164 notify.go:220] Checking for updates...
	I0717 18:29:00.212956  417164 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:29:00.212979  417164 status.go:255] checking status of ha-445282 ...
	I0717 18:29:00.213567  417164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:00.213616  417164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:00.228341  417164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39851
	I0717 18:29:00.228873  417164 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:00.229525  417164 main.go:141] libmachine: Using API Version  1
	I0717 18:29:00.229545  417164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:00.229962  417164 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:00.230174  417164 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:29:00.231814  417164 status.go:330] ha-445282 host status = "Running" (err=<nil>)
	I0717 18:29:00.231833  417164 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:29:00.232134  417164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:00.232180  417164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:00.248121  417164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46713
	I0717 18:29:00.248551  417164 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:00.249010  417164 main.go:141] libmachine: Using API Version  1
	I0717 18:29:00.249031  417164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:00.249375  417164 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:00.249564  417164 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:29:00.252743  417164 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:29:00.253150  417164 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:29:00.253172  417164 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:29:00.253292  417164 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:29:00.253578  417164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:00.253621  417164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:00.268998  417164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45145
	I0717 18:29:00.269487  417164 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:00.269972  417164 main.go:141] libmachine: Using API Version  1
	I0717 18:29:00.269998  417164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:00.270310  417164 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:00.270485  417164 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:29:00.270741  417164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:29:00.270777  417164 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:29:00.273703  417164 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:29:00.274184  417164 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:29:00.274212  417164 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:29:00.274389  417164 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:29:00.274565  417164 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:29:00.274726  417164 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:29:00.274874  417164 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:29:00.360341  417164 ssh_runner.go:195] Run: systemctl --version
	I0717 18:29:00.366810  417164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:29:00.385911  417164 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:29:00.385951  417164 api_server.go:166] Checking apiserver status ...
	I0717 18:29:00.385994  417164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:29:00.401880  417164 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0717 18:29:00.413488  417164 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:29:00.413538  417164 ssh_runner.go:195] Run: ls
	I0717 18:29:00.417948  417164 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:29:00.423306  417164 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:29:00.423326  417164 status.go:422] ha-445282 apiserver status = Running (err=<nil>)
	I0717 18:29:00.423336  417164 status.go:257] ha-445282 status: &{Name:ha-445282 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:29:00.423355  417164 status.go:255] checking status of ha-445282-m02 ...
	I0717 18:29:00.423649  417164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:00.423688  417164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:00.438549  417164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40901
	I0717 18:29:00.438988  417164 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:00.439443  417164 main.go:141] libmachine: Using API Version  1
	I0717 18:29:00.439464  417164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:00.439785  417164 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:00.439999  417164 main.go:141] libmachine: (ha-445282-m02) Calling .GetState
	I0717 18:29:00.441573  417164 status.go:330] ha-445282-m02 host status = "Stopped" (err=<nil>)
	I0717 18:29:00.441586  417164 status.go:343] host is not running, skipping remaining checks
	I0717 18:29:00.441593  417164 status.go:257] ha-445282-m02 status: &{Name:ha-445282-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:29:00.441621  417164 status.go:255] checking status of ha-445282-m03 ...
	I0717 18:29:00.441915  417164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:00.441952  417164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:00.456579  417164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40173
	I0717 18:29:00.456982  417164 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:00.457453  417164 main.go:141] libmachine: Using API Version  1
	I0717 18:29:00.457478  417164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:00.457789  417164 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:00.457989  417164 main.go:141] libmachine: (ha-445282-m03) Calling .GetState
	I0717 18:29:00.459371  417164 status.go:330] ha-445282-m03 host status = "Running" (err=<nil>)
	I0717 18:29:00.459390  417164 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:29:00.459746  417164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:00.459781  417164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:00.474559  417164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43651
	I0717 18:29:00.475057  417164 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:00.475683  417164 main.go:141] libmachine: Using API Version  1
	I0717 18:29:00.475704  417164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:00.476032  417164 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:00.476204  417164 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:29:00.478666  417164 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:29:00.479062  417164 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:29:00.479091  417164 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:29:00.479237  417164 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:29:00.479533  417164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:00.479573  417164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:00.494217  417164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45479
	I0717 18:29:00.494607  417164 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:00.495051  417164 main.go:141] libmachine: Using API Version  1
	I0717 18:29:00.495074  417164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:00.495387  417164 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:00.495571  417164 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:29:00.495776  417164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:29:00.495797  417164 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:29:00.498327  417164 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:29:00.498721  417164 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:29:00.498742  417164 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:29:00.498898  417164 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:29:00.499066  417164 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:29:00.499215  417164 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:29:00.499339  417164 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:29:00.585237  417164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:29:00.602343  417164 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:29:00.602375  417164 api_server.go:166] Checking apiserver status ...
	I0717 18:29:00.602417  417164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:29:00.618545  417164 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0717 18:29:00.630223  417164 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:29:00.630272  417164 ssh_runner.go:195] Run: ls
	I0717 18:29:00.634924  417164 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:29:00.639394  417164 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:29:00.639419  417164 status.go:422] ha-445282-m03 apiserver status = Running (err=<nil>)
	I0717 18:29:00.639431  417164 status.go:257] ha-445282-m03 status: &{Name:ha-445282-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:29:00.639452  417164 status.go:255] checking status of ha-445282-m04 ...
	I0717 18:29:00.639758  417164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:00.639801  417164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:00.655693  417164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38805
	I0717 18:29:00.656138  417164 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:00.656682  417164 main.go:141] libmachine: Using API Version  1
	I0717 18:29:00.656703  417164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:00.657045  417164 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:00.657255  417164 main.go:141] libmachine: (ha-445282-m04) Calling .GetState
	I0717 18:29:00.658755  417164 status.go:330] ha-445282-m04 host status = "Running" (err=<nil>)
	I0717 18:29:00.658775  417164 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:29:00.659179  417164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:00.659219  417164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:00.674142  417164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41503
	I0717 18:29:00.674526  417164 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:00.675057  417164 main.go:141] libmachine: Using API Version  1
	I0717 18:29:00.675089  417164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:00.675389  417164 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:00.675623  417164 main.go:141] libmachine: (ha-445282-m04) Calling .GetIP
	I0717 18:29:00.678071  417164 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:29:00.678484  417164 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:29:00.678508  417164 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:29:00.678706  417164 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:29:00.678998  417164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:00.679031  417164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:00.694005  417164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40367
	I0717 18:29:00.694423  417164 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:00.694865  417164 main.go:141] libmachine: Using API Version  1
	I0717 18:29:00.694891  417164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:00.695241  417164 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:00.695466  417164 main.go:141] libmachine: (ha-445282-m04) Calling .DriverName
	I0717 18:29:00.695645  417164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:29:00.695663  417164 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHHostname
	I0717 18:29:00.698146  417164 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:29:00.698529  417164 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:29:00.698557  417164 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:29:00.698654  417164 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHPort
	I0717 18:29:00.698829  417164 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHKeyPath
	I0717 18:29:00.699011  417164 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHUsername
	I0717 18:29:00.699166  417164 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m04/id_rsa Username:docker}
	I0717 18:29:00.783683  417164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:29:00.798358  417164 status.go:257] ha-445282-m04 status: &{Name:ha-445282-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr: exit status 7 (651.676191ms)

                                                
                                                
-- stdout --
	ha-445282
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-445282-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:29:10.056755  417270 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:29:10.057067  417270 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:29:10.057079  417270 out.go:304] Setting ErrFile to fd 2...
	I0717 18:29:10.057083  417270 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:29:10.057329  417270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:29:10.057539  417270 out.go:298] Setting JSON to false
	I0717 18:29:10.057578  417270 mustload.go:65] Loading cluster: ha-445282
	I0717 18:29:10.057678  417270 notify.go:220] Checking for updates...
	I0717 18:29:10.058063  417270 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:29:10.058082  417270 status.go:255] checking status of ha-445282 ...
	I0717 18:29:10.058565  417270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:10.058646  417270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:10.077100  417270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32935
	I0717 18:29:10.077620  417270 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:10.078198  417270 main.go:141] libmachine: Using API Version  1
	I0717 18:29:10.078220  417270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:10.078634  417270 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:10.078887  417270 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:29:10.080641  417270 status.go:330] ha-445282 host status = "Running" (err=<nil>)
	I0717 18:29:10.080664  417270 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:29:10.080985  417270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:10.081030  417270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:10.095807  417270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37621
	I0717 18:29:10.096267  417270 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:10.096780  417270 main.go:141] libmachine: Using API Version  1
	I0717 18:29:10.096832  417270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:10.097194  417270 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:10.097395  417270 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:29:10.100070  417270 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:29:10.100425  417270 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:29:10.100454  417270 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:29:10.100601  417270 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:29:10.100938  417270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:10.100973  417270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:10.116379  417270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0717 18:29:10.116827  417270 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:10.117355  417270 main.go:141] libmachine: Using API Version  1
	I0717 18:29:10.117376  417270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:10.117755  417270 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:10.118006  417270 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:29:10.118211  417270 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:29:10.118257  417270 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:29:10.121407  417270 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:29:10.121754  417270 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:29:10.121786  417270 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:29:10.121886  417270 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:29:10.122070  417270 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:29:10.122252  417270 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:29:10.122410  417270 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:29:10.205102  417270 ssh_runner.go:195] Run: systemctl --version
	I0717 18:29:10.211517  417270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:29:10.226795  417270 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:29:10.226836  417270 api_server.go:166] Checking apiserver status ...
	I0717 18:29:10.226897  417270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:29:10.245921  417270 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W0717 18:29:10.259570  417270 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:29:10.259637  417270 ssh_runner.go:195] Run: ls
	I0717 18:29:10.264985  417270 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:29:10.271005  417270 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:29:10.271035  417270 status.go:422] ha-445282 apiserver status = Running (err=<nil>)
	I0717 18:29:10.271047  417270 status.go:257] ha-445282 status: &{Name:ha-445282 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:29:10.271064  417270 status.go:255] checking status of ha-445282-m02 ...
	I0717 18:29:10.271455  417270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:10.271519  417270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:10.289734  417270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I0717 18:29:10.290207  417270 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:10.290714  417270 main.go:141] libmachine: Using API Version  1
	I0717 18:29:10.290736  417270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:10.291144  417270 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:10.291391  417270 main.go:141] libmachine: (ha-445282-m02) Calling .GetState
	I0717 18:29:10.293146  417270 status.go:330] ha-445282-m02 host status = "Stopped" (err=<nil>)
	I0717 18:29:10.293162  417270 status.go:343] host is not running, skipping remaining checks
	I0717 18:29:10.293182  417270 status.go:257] ha-445282-m02 status: &{Name:ha-445282-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:29:10.293200  417270 status.go:255] checking status of ha-445282-m03 ...
	I0717 18:29:10.293541  417270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:10.293589  417270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:10.308919  417270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I0717 18:29:10.309408  417270 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:10.310035  417270 main.go:141] libmachine: Using API Version  1
	I0717 18:29:10.310077  417270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:10.310462  417270 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:10.310663  417270 main.go:141] libmachine: (ha-445282-m03) Calling .GetState
	I0717 18:29:10.312298  417270 status.go:330] ha-445282-m03 host status = "Running" (err=<nil>)
	I0717 18:29:10.312315  417270 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:29:10.312733  417270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:10.312788  417270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:10.327746  417270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33033
	I0717 18:29:10.328267  417270 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:10.328766  417270 main.go:141] libmachine: Using API Version  1
	I0717 18:29:10.328790  417270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:10.329179  417270 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:10.329388  417270 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:29:10.332072  417270 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:29:10.332439  417270 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:29:10.332458  417270 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:29:10.332607  417270 host.go:66] Checking if "ha-445282-m03" exists ...
	I0717 18:29:10.332917  417270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:10.332959  417270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:10.347942  417270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I0717 18:29:10.348383  417270 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:10.348891  417270 main.go:141] libmachine: Using API Version  1
	I0717 18:29:10.348913  417270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:10.349273  417270 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:10.349507  417270 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:29:10.349692  417270 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:29:10.349713  417270 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:29:10.352666  417270 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:29:10.353094  417270 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:29:10.353117  417270 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:29:10.353281  417270 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:29:10.353499  417270 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:29:10.353664  417270 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:29:10.353871  417270 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:29:10.442236  417270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:29:10.459011  417270 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:29:10.459049  417270 api_server.go:166] Checking apiserver status ...
	I0717 18:29:10.459107  417270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:29:10.473543  417270 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup
	W0717 18:29:10.483581  417270 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1535/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:29:10.483651  417270 ssh_runner.go:195] Run: ls
	I0717 18:29:10.488569  417270 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:29:10.494049  417270 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:29:10.494069  417270 status.go:422] ha-445282-m03 apiserver status = Running (err=<nil>)
	I0717 18:29:10.494077  417270 status.go:257] ha-445282-m03 status: &{Name:ha-445282-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:29:10.494094  417270 status.go:255] checking status of ha-445282-m04 ...
	I0717 18:29:10.494377  417270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:10.494414  417270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:10.510264  417270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
	I0717 18:29:10.510692  417270 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:10.511212  417270 main.go:141] libmachine: Using API Version  1
	I0717 18:29:10.511241  417270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:10.511597  417270 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:10.511831  417270 main.go:141] libmachine: (ha-445282-m04) Calling .GetState
	I0717 18:29:10.513484  417270 status.go:330] ha-445282-m04 host status = "Running" (err=<nil>)
	I0717 18:29:10.513500  417270 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:29:10.513818  417270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:10.513855  417270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:10.529593  417270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40253
	I0717 18:29:10.530020  417270 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:10.530518  417270 main.go:141] libmachine: Using API Version  1
	I0717 18:29:10.530544  417270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:10.530872  417270 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:10.531119  417270 main.go:141] libmachine: (ha-445282-m04) Calling .GetIP
	I0717 18:29:10.534106  417270 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:29:10.534548  417270 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:29:10.534587  417270 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:29:10.534756  417270 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:29:10.535140  417270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:10.535194  417270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:10.551546  417270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34137
	I0717 18:29:10.551959  417270 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:10.552438  417270 main.go:141] libmachine: Using API Version  1
	I0717 18:29:10.552461  417270 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:10.552784  417270 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:10.552969  417270 main.go:141] libmachine: (ha-445282-m04) Calling .DriverName
	I0717 18:29:10.553158  417270 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:29:10.553175  417270 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHHostname
	I0717 18:29:10.555894  417270 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:29:10.556383  417270 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:29:10.556412  417270 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:29:10.556567  417270 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHPort
	I0717 18:29:10.556731  417270 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHKeyPath
	I0717 18:29:10.556903  417270 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHUsername
	I0717 18:29:10.557071  417270 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m04/id_rsa Username:docker}
	I0717 18:29:10.643924  417270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:29:10.659268  417270 status.go:257] ha-445282-m04 status: &{Name:ha-445282-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr" : exit status 7
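
For context on the exit code: the `minikube status` help text, as I read it, documents that the exit status encodes three "not OK" conditions as bits — 1 for the host/VM, 2 for the cluster components, and 4 for Kubernetes itself — so an exit status of 7 matches ha-445282-m02 reporting Host, Kubelet and APIServer all Stopped in the stdout above. A minimal illustrative Go decoder, assuming that bit layout (the constants and names here are assumptions for illustration, not identifiers from minikube's source):

package main

import "fmt"

// Assumed bit layout of the `minikube status` exit code, per its help text:
//   bit 0 (1): host/VM not OK
//   bit 1 (2): cluster components not OK
//   bit 2 (4): Kubernetes (apiserver) not OK
// These names are illustrative, not minikube's own identifiers.
func decodeStatusExitCode(code int) []string {
	var notOK []string
	if code&1 != 0 {
		notOK = append(notOK, "host")
	}
	if code&2 != 0 {
		notOK = append(notOK, "cluster")
	}
	if code&4 != 0 {
		notOK = append(notOK, "kubernetes")
	}
	return notOK
}

func main() {
	// Exit status 7 observed above: all three conditions flagged.
	fmt.Println(decodeStatusExitCode(7)) // [host cluster kubernetes]
}

Under that reading, the failure is consistent with m02 being the only node that never returns to Running before the test gives up and collects the post-mortem logs below.
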
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-445282 -n ha-445282
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-445282 logs -n 25: (1.427038243s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m03:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282:/home/docker/cp-test_ha-445282-m03_ha-445282.txt                       |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282 sudo cat                                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m03_ha-445282.txt                                 |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m03:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m02:/home/docker/cp-test_ha-445282-m03_ha-445282-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282-m02 sudo cat                                          | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m03_ha-445282-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m03:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04:/home/docker/cp-test_ha-445282-m03_ha-445282-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282-m04 sudo cat                                          | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m03_ha-445282-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-445282 cp testdata/cp-test.txt                                                | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3528186093/001/cp-test_ha-445282-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282:/home/docker/cp-test_ha-445282-m04_ha-445282.txt                       |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282 sudo cat                                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m04_ha-445282.txt                                 |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m02:/home/docker/cp-test_ha-445282-m04_ha-445282-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282-m02 sudo cat                                          | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m04_ha-445282-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m03:/home/docker/cp-test_ha-445282-m04_ha-445282-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282-m03 sudo cat                                          | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m04_ha-445282-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-445282 node stop m02 -v=7                                                     | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-445282 node start m02 -v=7                                                    | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:20:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:20:57.436165  411620 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:20:57.436283  411620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:20:57.436291  411620 out.go:304] Setting ErrFile to fd 2...
	I0717 18:20:57.436295  411620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:20:57.436465  411620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:20:57.437064  411620 out.go:298] Setting JSON to false
	I0717 18:20:57.437983  411620 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7400,"bootTime":1721233057,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:20:57.438039  411620 start.go:139] virtualization: kvm guest
	I0717 18:20:57.440089  411620 out.go:177] * [ha-445282] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:20:57.441619  411620 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 18:20:57.441693  411620 notify.go:220] Checking for updates...
	I0717 18:20:57.444079  411620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:20:57.445236  411620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:20:57.446353  411620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:20:57.447579  411620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:20:57.448901  411620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:20:57.450255  411620 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:20:57.483939  411620 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 18:20:57.485210  411620 start.go:297] selected driver: kvm2
	I0717 18:20:57.485228  411620 start.go:901] validating driver "kvm2" against <nil>
	I0717 18:20:57.485240  411620 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:20:57.485865  411620 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:20:57.485961  411620 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:20:57.500703  411620 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:20:57.500759  411620 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 18:20:57.501060  411620 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:20:57.501137  411620 cni.go:84] Creating CNI manager for ""
	I0717 18:20:57.501149  411620 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0717 18:20:57.501157  411620 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 18:20:57.501223  411620 start.go:340] cluster config:
	{Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:20:57.501315  411620 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:20:57.503126  411620 out.go:177] * Starting "ha-445282" primary control-plane node in "ha-445282" cluster
	I0717 18:20:57.504244  411620 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:20:57.504283  411620 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:20:57.504293  411620 cache.go:56] Caching tarball of preloaded images
	I0717 18:20:57.504386  411620 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:20:57.504412  411620 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:20:57.504751  411620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:20:57.504776  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json: {Name:mk3c4fde3e4f65735bd71ffe5ec31a71e72453f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:20:57.504962  411620 start.go:360] acquireMachinesLock for ha-445282: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:20:57.505003  411620 start.go:364] duration metric: took 20.55µs to acquireMachinesLock for "ha-445282"
	I0717 18:20:57.505026  411620 start.go:93] Provisioning new machine with config: &{Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:20:57.505087  411620 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 18:20:57.506715  411620 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 18:20:57.506867  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:20:57.506916  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:20:57.522000  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46147
	I0717 18:20:57.522438  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:20:57.523017  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:20:57.523038  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:20:57.523362  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:20:57.523549  411620 main.go:141] libmachine: (ha-445282) Calling .GetMachineName
	I0717 18:20:57.523707  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:20:57.523861  411620 start.go:159] libmachine.API.Create for "ha-445282" (driver="kvm2")
	I0717 18:20:57.523892  411620 client.go:168] LocalClient.Create starting
	I0717 18:20:57.523931  411620 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem
	I0717 18:20:57.523983  411620 main.go:141] libmachine: Decoding PEM data...
	I0717 18:20:57.523997  411620 main.go:141] libmachine: Parsing certificate...
	I0717 18:20:57.524050  411620 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem
	I0717 18:20:57.524068  411620 main.go:141] libmachine: Decoding PEM data...
	I0717 18:20:57.524081  411620 main.go:141] libmachine: Parsing certificate...
	I0717 18:20:57.524097  411620 main.go:141] libmachine: Running pre-create checks...
	I0717 18:20:57.524115  411620 main.go:141] libmachine: (ha-445282) Calling .PreCreateCheck
	I0717 18:20:57.524459  411620 main.go:141] libmachine: (ha-445282) Calling .GetConfigRaw
	I0717 18:20:57.524871  411620 main.go:141] libmachine: Creating machine...
	I0717 18:20:57.524890  411620 main.go:141] libmachine: (ha-445282) Calling .Create
	I0717 18:20:57.524998  411620 main.go:141] libmachine: (ha-445282) Creating KVM machine...
	I0717 18:20:57.526540  411620 main.go:141] libmachine: (ha-445282) DBG | found existing default KVM network
	I0717 18:20:57.527290  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:20:57.527160  411643 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0717 18:20:57.527318  411620 main.go:141] libmachine: (ha-445282) DBG | created network xml: 
	I0717 18:20:57.527330  411620 main.go:141] libmachine: (ha-445282) DBG | <network>
	I0717 18:20:57.527343  411620 main.go:141] libmachine: (ha-445282) DBG |   <name>mk-ha-445282</name>
	I0717 18:20:57.527368  411620 main.go:141] libmachine: (ha-445282) DBG |   <dns enable='no'/>
	I0717 18:20:57.527381  411620 main.go:141] libmachine: (ha-445282) DBG |   
	I0717 18:20:57.527387  411620 main.go:141] libmachine: (ha-445282) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 18:20:57.527392  411620 main.go:141] libmachine: (ha-445282) DBG |     <dhcp>
	I0717 18:20:57.527397  411620 main.go:141] libmachine: (ha-445282) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 18:20:57.527412  411620 main.go:141] libmachine: (ha-445282) DBG |     </dhcp>
	I0717 18:20:57.527416  411620 main.go:141] libmachine: (ha-445282) DBG |   </ip>
	I0717 18:20:57.527421  411620 main.go:141] libmachine: (ha-445282) DBG |   
	I0717 18:20:57.527428  411620 main.go:141] libmachine: (ha-445282) DBG | </network>
	I0717 18:20:57.527433  411620 main.go:141] libmachine: (ha-445282) DBG | 
	I0717 18:20:57.532943  411620 main.go:141] libmachine: (ha-445282) DBG | trying to create private KVM network mk-ha-445282 192.168.39.0/24...
	I0717 18:20:57.598596  411620 main.go:141] libmachine: (ha-445282) DBG | private KVM network mk-ha-445282 192.168.39.0/24 created
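The XML printed just above is the private libvirt network backing the cluster (no DNS, a /24 with a DHCP range). As a rough sketch of the same step, assuming `virsh` is on PATH and ignoring how minikube's kvm2 driver actually talks to libvirt, the definition could be written to a file and registered like this:

```go
// Hypothetical sketch: register a libvirt network from an XML definition
// like the mk-ha-445282 one in the log by shelling out to virsh.
// The temp-file handling and subcommand sequence are illustrative only.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-ha-445282</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		panic(err)
	}
	f.Close()

	// Define the network, start it, and mark it autostart.
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-ha-445282"},
		{"net-autostart", "mk-ha-445282"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v: %s\n", args, out)
		if err != nil {
			panic(err)
		}
	}
}
```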
	I0717 18:20:57.598652  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:20:57.598558  411643 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:20:57.598666  411620 main.go:141] libmachine: (ha-445282) Setting up store path in /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282 ...
	I0717 18:20:57.598689  411620 main.go:141] libmachine: (ha-445282) Building disk image from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 18:20:57.598749  411620 main.go:141] libmachine: (ha-445282) Downloading /home/jenkins/minikube-integration/19282-392903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 18:20:57.861831  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:20:57.861709  411643 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa...
	I0717 18:20:58.033735  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:20:58.033596  411643 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/ha-445282.rawdisk...
	I0717 18:20:58.033779  411620 main.go:141] libmachine: (ha-445282) DBG | Writing magic tar header
	I0717 18:20:58.033790  411620 main.go:141] libmachine: (ha-445282) DBG | Writing SSH key tar header
	I0717 18:20:58.033798  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:20:58.033716  411643 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282 ...
	I0717 18:20:58.033888  411620 main.go:141] libmachine: (ha-445282) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282
	I0717 18:20:58.033909  411620 main.go:141] libmachine: (ha-445282) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines
	I0717 18:20:58.033935  411620 main.go:141] libmachine: (ha-445282) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282 (perms=drwx------)
	I0717 18:20:58.033950  411620 main.go:141] libmachine: (ha-445282) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines (perms=drwxr-xr-x)
	I0717 18:20:58.033965  411620 main.go:141] libmachine: (ha-445282) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube (perms=drwxr-xr-x)
	I0717 18:20:58.033981  411620 main.go:141] libmachine: (ha-445282) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903 (perms=drwxrwxr-x)
	I0717 18:20:58.034001  411620 main.go:141] libmachine: (ha-445282) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:20:58.034014  411620 main.go:141] libmachine: (ha-445282) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 18:20:58.034027  411620 main.go:141] libmachine: (ha-445282) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 18:20:58.034036  411620 main.go:141] libmachine: (ha-445282) Creating domain...
	I0717 18:20:58.034051  411620 main.go:141] libmachine: (ha-445282) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903
	I0717 18:20:58.034064  411620 main.go:141] libmachine: (ha-445282) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 18:20:58.034081  411620 main.go:141] libmachine: (ha-445282) DBG | Checking permissions on dir: /home/jenkins
	I0717 18:20:58.034093  411620 main.go:141] libmachine: (ha-445282) DBG | Checking permissions on dir: /home
	I0717 18:20:58.034101  411620 main.go:141] libmachine: (ha-445282) DBG | Skipping /home - not owner
	I0717 18:20:58.035132  411620 main.go:141] libmachine: (ha-445282) define libvirt domain using xml: 
	I0717 18:20:58.035152  411620 main.go:141] libmachine: (ha-445282) <domain type='kvm'>
	I0717 18:20:58.035159  411620 main.go:141] libmachine: (ha-445282)   <name>ha-445282</name>
	I0717 18:20:58.035164  411620 main.go:141] libmachine: (ha-445282)   <memory unit='MiB'>2200</memory>
	I0717 18:20:58.035208  411620 main.go:141] libmachine: (ha-445282)   <vcpu>2</vcpu>
	I0717 18:20:58.035237  411620 main.go:141] libmachine: (ha-445282)   <features>
	I0717 18:20:58.035267  411620 main.go:141] libmachine: (ha-445282)     <acpi/>
	I0717 18:20:58.035287  411620 main.go:141] libmachine: (ha-445282)     <apic/>
	I0717 18:20:58.035297  411620 main.go:141] libmachine: (ha-445282)     <pae/>
	I0717 18:20:58.035318  411620 main.go:141] libmachine: (ha-445282)     
	I0717 18:20:58.035331  411620 main.go:141] libmachine: (ha-445282)   </features>
	I0717 18:20:58.035343  411620 main.go:141] libmachine: (ha-445282)   <cpu mode='host-passthrough'>
	I0717 18:20:58.035354  411620 main.go:141] libmachine: (ha-445282)   
	I0717 18:20:58.035369  411620 main.go:141] libmachine: (ha-445282)   </cpu>
	I0717 18:20:58.035380  411620 main.go:141] libmachine: (ha-445282)   <os>
	I0717 18:20:58.035390  411620 main.go:141] libmachine: (ha-445282)     <type>hvm</type>
	I0717 18:20:58.035400  411620 main.go:141] libmachine: (ha-445282)     <boot dev='cdrom'/>
	I0717 18:20:58.035411  411620 main.go:141] libmachine: (ha-445282)     <boot dev='hd'/>
	I0717 18:20:58.035421  411620 main.go:141] libmachine: (ha-445282)     <bootmenu enable='no'/>
	I0717 18:20:58.035430  411620 main.go:141] libmachine: (ha-445282)   </os>
	I0717 18:20:58.035441  411620 main.go:141] libmachine: (ha-445282)   <devices>
	I0717 18:20:58.035454  411620 main.go:141] libmachine: (ha-445282)     <disk type='file' device='cdrom'>
	I0717 18:20:58.035463  411620 main.go:141] libmachine: (ha-445282)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/boot2docker.iso'/>
	I0717 18:20:58.035471  411620 main.go:141] libmachine: (ha-445282)       <target dev='hdc' bus='scsi'/>
	I0717 18:20:58.035495  411620 main.go:141] libmachine: (ha-445282)       <readonly/>
	I0717 18:20:58.035511  411620 main.go:141] libmachine: (ha-445282)     </disk>
	I0717 18:20:58.035538  411620 main.go:141] libmachine: (ha-445282)     <disk type='file' device='disk'>
	I0717 18:20:58.035556  411620 main.go:141] libmachine: (ha-445282)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 18:20:58.035570  411620 main.go:141] libmachine: (ha-445282)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/ha-445282.rawdisk'/>
	I0717 18:20:58.035581  411620 main.go:141] libmachine: (ha-445282)       <target dev='hda' bus='virtio'/>
	I0717 18:20:58.035605  411620 main.go:141] libmachine: (ha-445282)     </disk>
	I0717 18:20:58.035617  411620 main.go:141] libmachine: (ha-445282)     <interface type='network'>
	I0717 18:20:58.035628  411620 main.go:141] libmachine: (ha-445282)       <source network='mk-ha-445282'/>
	I0717 18:20:58.035637  411620 main.go:141] libmachine: (ha-445282)       <model type='virtio'/>
	I0717 18:20:58.035660  411620 main.go:141] libmachine: (ha-445282)     </interface>
	I0717 18:20:58.035676  411620 main.go:141] libmachine: (ha-445282)     <interface type='network'>
	I0717 18:20:58.035697  411620 main.go:141] libmachine: (ha-445282)       <source network='default'/>
	I0717 18:20:58.035720  411620 main.go:141] libmachine: (ha-445282)       <model type='virtio'/>
	I0717 18:20:58.035734  411620 main.go:141] libmachine: (ha-445282)     </interface>
	I0717 18:20:58.035746  411620 main.go:141] libmachine: (ha-445282)     <serial type='pty'>
	I0717 18:20:58.035760  411620 main.go:141] libmachine: (ha-445282)       <target port='0'/>
	I0717 18:20:58.035772  411620 main.go:141] libmachine: (ha-445282)     </serial>
	I0717 18:20:58.035811  411620 main.go:141] libmachine: (ha-445282)     <console type='pty'>
	I0717 18:20:58.035829  411620 main.go:141] libmachine: (ha-445282)       <target type='serial' port='0'/>
	I0717 18:20:58.035845  411620 main.go:141] libmachine: (ha-445282)     </console>
	I0717 18:20:58.035858  411620 main.go:141] libmachine: (ha-445282)     <rng model='virtio'>
	I0717 18:20:58.035870  411620 main.go:141] libmachine: (ha-445282)       <backend model='random'>/dev/random</backend>
	I0717 18:20:58.035881  411620 main.go:141] libmachine: (ha-445282)     </rng>
	I0717 18:20:58.035891  411620 main.go:141] libmachine: (ha-445282)     
	I0717 18:20:58.035899  411620 main.go:141] libmachine: (ha-445282)     
	I0717 18:20:58.035916  411620 main.go:141] libmachine: (ha-445282)   </devices>
	I0717 18:20:58.035928  411620 main.go:141] libmachine: (ha-445282) </domain>
	I0717 18:20:58.035948  411620 main.go:141] libmachine: (ha-445282) 
	I0717 18:20:58.040261  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:b8:ed:24 in network default
	I0717 18:20:58.040842  411620 main.go:141] libmachine: (ha-445282) Ensuring networks are active...
	I0717 18:20:58.040869  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:20:58.041542  411620 main.go:141] libmachine: (ha-445282) Ensuring network default is active
	I0717 18:20:58.041832  411620 main.go:141] libmachine: (ha-445282) Ensuring network mk-ha-445282 is active
	I0717 18:20:58.042308  411620 main.go:141] libmachine: (ha-445282) Getting domain xml...
	I0717 18:20:58.043039  411620 main.go:141] libmachine: (ha-445282) Creating domain...
	I0717 18:20:59.220885  411620 main.go:141] libmachine: (ha-445282) Waiting to get IP...
	I0717 18:20:59.221617  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:20:59.221956  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:20:59.222014  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:20:59.221959  411643 retry.go:31] will retry after 202.848571ms: waiting for machine to come up
	I0717 18:20:59.426397  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:20:59.426991  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:20:59.427014  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:20:59.426935  411643 retry.go:31] will retry after 305.888058ms: waiting for machine to come up
	I0717 18:20:59.734533  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:20:59.734978  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:20:59.735008  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:20:59.734919  411643 retry.go:31] will retry after 311.867851ms: waiting for machine to come up
	I0717 18:21:00.048631  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:00.049063  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:00.049084  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:00.049036  411643 retry.go:31] will retry after 590.611781ms: waiting for machine to come up
	I0717 18:21:00.640804  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:00.641354  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:00.641387  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:00.641305  411643 retry.go:31] will retry after 624.757031ms: waiting for machine to come up
	I0717 18:21:01.268174  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:01.268594  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:01.268619  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:01.268568  411643 retry.go:31] will retry after 602.906786ms: waiting for machine to come up
	I0717 18:21:01.873404  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:01.873843  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:01.873899  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:01.873797  411643 retry.go:31] will retry after 982.323542ms: waiting for machine to come up
	I0717 18:21:02.857484  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:02.857871  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:02.857905  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:02.857809  411643 retry.go:31] will retry after 1.327628548s: waiting for machine to come up
	I0717 18:21:04.187336  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:04.187719  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:04.187749  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:04.187671  411643 retry.go:31] will retry after 1.147670985s: waiting for machine to come up
	I0717 18:21:05.336932  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:05.337324  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:05.337356  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:05.337280  411643 retry.go:31] will retry after 1.65527994s: waiting for machine to come up
	I0717 18:21:06.993944  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:06.994349  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:06.994371  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:06.994320  411643 retry.go:31] will retry after 2.692639352s: waiting for machine to come up
	I0717 18:21:09.689766  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:09.690211  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:09.690244  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:09.690157  411643 retry.go:31] will retry after 3.508073211s: waiting for machine to come up
	I0717 18:21:13.199436  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:13.199915  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find current IP address of domain ha-445282 in network mk-ha-445282
	I0717 18:21:13.199940  411620 main.go:141] libmachine: (ha-445282) DBG | I0717 18:21:13.199876  411643 retry.go:31] will retry after 4.513256721s: waiting for machine to come up
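The retries above poll the network's DHCP leases with a growing delay until the new domain reports an address. A minimal sketch of that backoff pattern, with `lookupIP` standing in for whatever actually reads the lease table (an assumption, not minikube's own retry helper):

```go
// Backoff-polling sketch: keep asking for the machine's IP, sleeping a
// little longer between attempts, until it appears or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
	backoff := 200 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
		backoff += backoff / 2 // grow roughly 1.5x per attempt
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 { // pretend the lease shows up on the 4th poll
			return "", errors.New("no lease yet")
		}
		return "192.168.39.147", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}
```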
	I0717 18:21:17.714267  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:17.714651  411620 main.go:141] libmachine: (ha-445282) Found IP for machine: 192.168.39.147
	I0717 18:21:17.714674  411620 main.go:141] libmachine: (ha-445282) Reserving static IP address...
	I0717 18:21:17.714687  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has current primary IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:17.715022  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find host DHCP lease matching {name: "ha-445282", mac: "52:54:00:1e:00:89", ip: "192.168.39.147"} in network mk-ha-445282
	I0717 18:21:17.785335  411620 main.go:141] libmachine: (ha-445282) DBG | Getting to WaitForSSH function...
	I0717 18:21:17.785369  411620 main.go:141] libmachine: (ha-445282) Reserved static IP address: 192.168.39.147
	I0717 18:21:17.785382  411620 main.go:141] libmachine: (ha-445282) Waiting for SSH to be available...
	I0717 18:21:17.788027  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:17.788426  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282
	I0717 18:21:17.788454  411620 main.go:141] libmachine: (ha-445282) DBG | unable to find defined IP address of network mk-ha-445282 interface with MAC address 52:54:00:1e:00:89
	I0717 18:21:17.788641  411620 main.go:141] libmachine: (ha-445282) DBG | Using SSH client type: external
	I0717 18:21:17.788665  411620 main.go:141] libmachine: (ha-445282) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa (-rw-------)
	I0717 18:21:17.788701  411620 main.go:141] libmachine: (ha-445282) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:21:17.788718  411620 main.go:141] libmachine: (ha-445282) DBG | About to run SSH command:
	I0717 18:21:17.788731  411620 main.go:141] libmachine: (ha-445282) DBG | exit 0
	I0717 18:21:17.792256  411620 main.go:141] libmachine: (ha-445282) DBG | SSH cmd err, output: exit status 255: 
	I0717 18:21:17.792281  411620 main.go:141] libmachine: (ha-445282) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 18:21:17.792291  411620 main.go:141] libmachine: (ha-445282) DBG | command : exit 0
	I0717 18:21:17.792297  411620 main.go:141] libmachine: (ha-445282) DBG | err     : exit status 255
	I0717 18:21:17.792307  411620 main.go:141] libmachine: (ha-445282) DBG | output  : 
	I0717 18:21:20.792509  411620 main.go:141] libmachine: (ha-445282) DBG | Getting to WaitForSSH function...
	I0717 18:21:20.794941  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:20.795337  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:20.795365  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:20.795500  411620 main.go:141] libmachine: (ha-445282) DBG | Using SSH client type: external
	I0717 18:21:20.795544  411620 main.go:141] libmachine: (ha-445282) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa (-rw-------)
	I0717 18:21:20.795571  411620 main.go:141] libmachine: (ha-445282) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:21:20.795623  411620 main.go:141] libmachine: (ha-445282) DBG | About to run SSH command:
	I0717 18:21:20.795648  411620 main.go:141] libmachine: (ha-445282) DBG | exit 0
	I0717 18:21:20.920319  411620 main.go:141] libmachine: (ha-445282) DBG | SSH cmd err, output: <nil>: 
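The `<nil>` error above means the `exit 0` probe finally succeeded, so sshd inside the guest is answering (the earlier attempt failed with exit status 255, having been run before an IP was resolved for the host). A small sketch of that readiness check, reusing the address and key path from the log but with an assumed flag set and retry cadence:

```go
// SSH readiness probe: run "exit 0" on the guest; a zero exit status
// means the connection and authentication both work.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(addr, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+addr,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	addr := "192.168.39.147"
	key := "/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa"
	for i := 0; i < 10; i++ {
		if sshReady(addr, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // the log waits roughly 3s between probes
	}
	fmt.Println("gave up waiting for SSH")
}
```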
	I0717 18:21:20.920551  411620 main.go:141] libmachine: (ha-445282) KVM machine creation complete!
	I0717 18:21:20.920977  411620 main.go:141] libmachine: (ha-445282) Calling .GetConfigRaw
	I0717 18:21:20.921496  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:20.921689  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:20.921921  411620 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 18:21:20.921935  411620 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:21:20.923205  411620 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 18:21:20.923219  411620 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 18:21:20.923224  411620 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 18:21:20.923230  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:20.925849  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:20.926217  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:20.926241  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:20.926394  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:20.926578  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:20.926752  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:20.926884  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:20.927072  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:21:20.927266  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:21:20.927278  411620 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 18:21:21.027676  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:21:21.027706  411620 main.go:141] libmachine: Detecting the provisioner...
	I0717 18:21:21.027715  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:21.030452  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.030749  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:21.030781  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.030969  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:21.031148  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.031277  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.031362  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:21.031490  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:21:21.031677  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:21:21.031692  411620 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 18:21:21.137271  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 18:21:21.137364  411620 main.go:141] libmachine: found compatible host: buildroot
	I0717 18:21:21.137371  411620 main.go:141] libmachine: Provisioning with buildroot...
	I0717 18:21:21.137381  411620 main.go:141] libmachine: (ha-445282) Calling .GetMachineName
	I0717 18:21:21.137638  411620 buildroot.go:166] provisioning hostname "ha-445282"
	I0717 18:21:21.137672  411620 main.go:141] libmachine: (ha-445282) Calling .GetMachineName
	I0717 18:21:21.137879  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:21.140437  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.140826  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:21.140850  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.140999  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:21.141215  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.141377  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.141501  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:21.141700  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:21:21.141925  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:21:21.141942  411620 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-445282 && echo "ha-445282" | sudo tee /etc/hostname
	I0717 18:21:21.262858  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-445282
	
	I0717 18:21:21.262889  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:21.265383  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.265781  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:21.265812  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.265956  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:21.266165  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.266344  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.266509  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:21.266698  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:21:21.266900  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:21:21.266922  411620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-445282' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-445282/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-445282' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:21:21.377228  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:21:21.377262  411620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 18:21:21.377297  411620 buildroot.go:174] setting up certificates
	I0717 18:21:21.377310  411620 provision.go:84] configureAuth start
	I0717 18:21:21.377328  411620 main.go:141] libmachine: (ha-445282) Calling .GetMachineName
	I0717 18:21:21.377673  411620 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:21:21.380125  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.380419  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:21.380499  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.380561  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:21.382442  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.382734  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:21.382764  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.382864  411620 provision.go:143] copyHostCerts
	I0717 18:21:21.382908  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:21:21.382946  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 18:21:21.382959  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:21:21.383040  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 18:21:21.383149  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:21:21.383177  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 18:21:21.383184  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:21:21.383224  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 18:21:21.383288  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:21:21.383315  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 18:21:21.383324  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:21:21.383356  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 18:21:21.383460  411620 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.ha-445282 san=[127.0.0.1 192.168.39.147 ha-445282 localhost minikube]
	I0717 18:21:21.666961  411620 provision.go:177] copyRemoteCerts
	I0717 18:21:21.667030  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:21:21.667061  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:21.669819  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.670087  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:21.670117  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.670229  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:21.670442  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.670567  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:21.670665  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:21:21.750463  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 18:21:21.750539  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 18:21:21.791481  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 18:21:21.791553  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 18:21:21.814568  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 18:21:21.814639  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:21:21.836914  411620 provision.go:87] duration metric: took 459.58856ms to configureAuth
	I0717 18:21:21.836947  411620 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:21:21.837116  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:21:21.837209  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:21.839775  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.840118  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:21.840148  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:21.840324  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:21.840645  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.840801  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:21.840968  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:21.841171  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:21:21.841345  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:21:21.841362  411620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:21:22.095838  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
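The `%!s(MISSING)` in the command above is almost certainly not what ran on the guest: it is Go's fmt package escaping a `%s` verb that was logged without a matching argument, so the command actually sent contains `printf %s "..."`. The same artifact appears again further down in this log (for example in the `date +%s.%N` probe). A one-line Go program reproduces the rendering:

```go
// Passing a string that itself contains %s through Printf with no
// arguments yields the %!s(MISSING) placeholder seen in the log.
package main

import "fmt"

func main() {
	fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s ...\n")
}
```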
	I0717 18:21:22.095869  411620 main.go:141] libmachine: Checking connection to Docker...
	I0717 18:21:22.095877  411620 main.go:141] libmachine: (ha-445282) Calling .GetURL
	I0717 18:21:22.097344  411620 main.go:141] libmachine: (ha-445282) DBG | Using libvirt version 6000000
	I0717 18:21:22.099561  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.099938  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:22.099955  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.100220  411620 main.go:141] libmachine: Docker is up and running!
	I0717 18:21:22.100231  411620 main.go:141] libmachine: Reticulating splines...
	I0717 18:21:22.100240  411620 client.go:171] duration metric: took 24.576338191s to LocalClient.Create
	I0717 18:21:22.100265  411620 start.go:167] duration metric: took 24.576406812s to libmachine.API.Create "ha-445282"
	I0717 18:21:22.100275  411620 start.go:293] postStartSetup for "ha-445282" (driver="kvm2")
	I0717 18:21:22.100285  411620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:21:22.100303  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:22.100596  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:21:22.100649  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:22.102940  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.103269  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:22.103300  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.103402  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:22.103602  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:22.103793  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:22.103946  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:21:22.186915  411620 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:21:22.191005  411620 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:21:22.191036  411620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 18:21:22.191108  411620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 18:21:22.191183  411620 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 18:21:22.191193  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /etc/ssl/certs/4001712.pem
	I0717 18:21:22.191282  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:21:22.200340  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:21:22.223193  411620 start.go:296] duration metric: took 122.904606ms for postStartSetup
	I0717 18:21:22.223247  411620 main.go:141] libmachine: (ha-445282) Calling .GetConfigRaw
	I0717 18:21:22.223792  411620 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:21:22.226406  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.226771  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:22.226811  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.227024  411620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:21:22.227223  411620 start.go:128] duration metric: took 24.722125053s to createHost
	I0717 18:21:22.227248  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:22.229579  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.229940  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:22.229976  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.230092  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:22.230312  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:22.230451  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:22.230635  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:22.230783  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:21:22.230967  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:21:22.230980  411620 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:21:22.332986  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721240482.301507503
	
	I0717 18:21:22.333009  411620 fix.go:216] guest clock: 1721240482.301507503
	I0717 18:21:22.333016  411620 fix.go:229] Guest: 2024-07-17 18:21:22.301507503 +0000 UTC Remote: 2024-07-17 18:21:22.227234993 +0000 UTC m=+24.826912968 (delta=74.27251ms)
	I0717 18:21:22.333036  411620 fix.go:200] guest clock delta is within tolerance: 74.27251ms
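The skew check above compares the guest clock, read over SSH with `date +%s.%N`, against the host clock and only flags the machine when the difference exceeds a tolerance. A compact sketch of that comparison using the values from the log (the one-second tolerance is an assumption, not minikube's actual threshold):

```go
// Clock-skew check sketch: absolute difference between guest and host
// time, compared against a tolerance.
package main

import (
	"fmt"
	"time"
)

func clockDelta(guest, host time.Time) time.Duration {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	guest := time.Unix(1721240482, 301507503)      // parsed from "date +%s.%N"
	host := guest.Add(-74272510 * time.Nanosecond) // host sampled ~74ms earlier
	delta := clockDelta(guest, host)
	tolerance := time.Second // assumed threshold
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %v; a time sync would be needed\n", delta)
	}
}
```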
	I0717 18:21:22.333041  411620 start.go:83] releasing machines lock for "ha-445282", held for 24.828027677s
	I0717 18:21:22.333060  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:22.333328  411620 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:21:22.335990  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.336328  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:22.336359  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.336526  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:22.337023  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:22.337180  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:22.337256  411620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:21:22.337314  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:22.337395  411620 ssh_runner.go:195] Run: cat /version.json
	I0717 18:21:22.337417  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:22.339892  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.340114  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.340199  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:22.340224  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.340351  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:22.340472  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:22.340499  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:22.340519  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:22.340629  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:22.340697  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:22.340779  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:21:22.340860  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:22.340974  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:22.341105  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:21:22.417364  411620 ssh_runner.go:195] Run: systemctl --version
	I0717 18:21:22.439526  411620 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:21:22.594413  411620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:21:22.600293  411620 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:21:22.600368  411620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:21:22.617016  411620 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:21:22.617034  411620 start.go:495] detecting cgroup driver to use...
	I0717 18:21:22.617090  411620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:21:22.635011  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:21:22.649175  411620 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:21:22.649231  411620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:21:22.663527  411620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:21:22.677441  411620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:21:22.785761  411620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:21:22.943623  411620 docker.go:233] disabling docker service ...
	I0717 18:21:22.943707  411620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:21:22.958320  411620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:21:22.971036  411620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:21:23.098713  411620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:21:23.217720  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
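Before configuring CRI-O, the runner stops, disables and masks the cri-docker and docker units in the order shown above. A hedged Go sketch of that systemctl sequence; runCmd is a hypothetical stand-in for minikube's ssh_runner, and errors from "stop" are ignored because those units may simply not be running:

package main

import (
	"fmt"
	"os/exec"
)

// runCmd runs a command and reports any failure together with its output.
func runCmd(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v (%s)", name, args, err, out)
	}
	return nil
}

// disableDockerRuntimes mirrors the log above: stop the sockets and services,
// then disable/mask them so they cannot come back on reboot.
func disableDockerRuntimes() error {
	_ = runCmd("sudo", "systemctl", "stop", "-f", "cri-docker.socket")
	_ = runCmd("sudo", "systemctl", "stop", "-f", "cri-docker.service")
	_ = runCmd("sudo", "systemctl", "disable", "cri-docker.socket")
	_ = runCmd("sudo", "systemctl", "mask", "cri-docker.service")

	_ = runCmd("sudo", "systemctl", "stop", "-f", "docker.socket")
	_ = runCmd("sudo", "systemctl", "stop", "-f", "docker.service")
	_ = runCmd("sudo", "systemctl", "disable", "docker.socket")
	return runCmd("sudo", "systemctl", "mask", "docker.service")
}

func main() {
	if err := disableDockerRuntimes(); err != nil {
		fmt.Println("disabling docker runtimes:", err)
	}
}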
	I0717 18:21:23.231673  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:21:23.249150  411620 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:21:23.249232  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:21:23.259442  411620 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:21:23.259510  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:21:23.270000  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:21:23.280540  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:21:23.290803  411620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:21:23.301859  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:21:23.312222  411620 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:21:23.328710  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:21:23.338990  411620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:21:23.348295  411620 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:21:23.348340  411620 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:21:23.361109  411620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
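The netfilter step above first probes the sysctl, and when /proc/sys/net/bridge is absent (the status 255 error) it falls back to loading br_netfilter and then enables IPv4 forwarding. A minimal sketch of that probe-then-modprobe fallback, assuming plain local exec rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter checks whether the bridge netfilter sysctl is visible
// and, if not, tries to load the br_netfilter module; finally it turns on
// IPv4 forwarding, mirroring the sequence in the log.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl only exists once br_netfilter is loaded, so this failure is expected.
		fmt.Println("bridge-nf-call-iptables not available yet:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}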
	I0717 18:21:23.370165  411620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:21:23.490594  411620 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:21:23.620993  411620 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:21:23.621061  411620 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:21:23.625679  411620 start.go:563] Will wait 60s for crictl version
	I0717 18:21:23.625725  411620 ssh_runner.go:195] Run: which crictl
	I0717 18:21:23.629274  411620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:21:23.672452  411620 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:21:23.672588  411620 ssh_runner.go:195] Run: crio --version
	I0717 18:21:23.700008  411620 ssh_runner.go:195] Run: crio --version
	I0717 18:21:23.728007  411620 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:21:23.729368  411620 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:21:23.732093  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:23.732516  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:23.732545  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:23.732767  411620 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:21:23.736607  411620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
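The host.minikube.internal entry is pinned by filtering any old line out of /etc/hosts and appending the current one; the same grep -v plus append pattern is reused later for control-plane.minikube.internal. A hypothetical Go version of that idempotent rewrite (paths and helper names are illustrative only, and /tmp/hosts stands in for the real /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry rewrites hostsPath so that exactly one line maps name to ip.
func pinHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasSuffix(trimmed, "\t"+name) || strings.HasSuffix(trimmed, " "+name) {
			continue // drop any stale mapping for this name
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHostsEntry("/tmp/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("pinning hosts entry:", err)
	}
}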
	I0717 18:21:23.749270  411620 kubeadm.go:883] updating cluster {Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:21:23.749370  411620 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:21:23.749412  411620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:21:23.780984  411620 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:21:23.781048  411620 ssh_runner.go:195] Run: which lz4
	I0717 18:21:23.784708  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 18:21:23.784783  411620 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 18:21:23.788825  411620 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:21:23.788850  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:21:25.116307  411620 crio.go:462] duration metric: took 1.331540851s to copy over tarball
	I0717 18:21:25.116375  411620 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:21:27.178175  411620 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.061761749s)
	I0717 18:21:27.178212  411620 crio.go:469] duration metric: took 2.061875001s to extract the tarball
	I0717 18:21:27.178223  411620 ssh_runner.go:146] rm: /preloaded.tar.lz4
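The preload handling above is: stat the tarball on the guest, copy it over only when it is missing, extract it with lz4-aware tar, then delete it. A sketch of the same check, copy, extract, remove flow, assuming local commands in place of minikube's scp and ssh_runner plumbing:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload copies the preloaded image tarball to the target path if it
// is not already there, unpacks it under /var, and removes the tarball,
// matching the sequence shown in the log.
func extractPreload(localTarball, remoteTarball string) error {
	if _, err := os.Stat(remoteTarball); os.IsNotExist(err) {
		// In minikube this step is an scp to the guest; plain cp stands in here.
		if err := exec.Command("cp", localTarball, remoteTarball).Run(); err != nil {
			return fmt.Errorf("copying preload tarball: %w", err)
		}
	}
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", remoteTarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extracting preload: %v (%s)", err, out)
	}
	return os.Remove(remoteTarball)
}

func main() {
	err := extractPreload(
		"preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4",
		"/preloaded.tar.lz4")
	if err != nil {
		fmt.Println(err)
	}
}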
	I0717 18:21:27.214992  411620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:21:27.256698  411620 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:21:27.256720  411620 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:21:27.256729  411620 kubeadm.go:934] updating node { 192.168.39.147 8443 v1.30.2 crio true true} ...
	I0717 18:21:27.256851  411620 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-445282 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:21:27.256921  411620 ssh_runner.go:195] Run: crio config
	I0717 18:21:27.304125  411620 cni.go:84] Creating CNI manager for ""
	I0717 18:21:27.304149  411620 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 18:21:27.304167  411620 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:21:27.304190  411620 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-445282 NodeName:ha-445282 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:21:27.304315  411620 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-445282"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:21:27.304339  411620 kube-vip.go:115] generating kube-vip config ...
	I0717 18:21:27.304382  411620 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 18:21:27.322496  411620 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 18:21:27.322634  411620 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
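The kube-vip config above only carries lb_enable/lb_port because the modprobe of the ip_vs modules at 18:21:27.304382 succeeded, which is what the "auto-enabling control-plane load-balancing" line reports. A hedged Go sketch of that decision, reduced to the two toggles and with the manifest template left out:

package main

import (
	"fmt"
	"os/exec"
)

// kubeVIPEnv builds the load-balancing environment entries for the kube-vip
// static pod: lb_enable/lb_port are only added when the IPVS modules load
// cleanly, mirroring the auto-enable behaviour in the log.
func kubeVIPEnv(vip, port string) map[string]string {
	env := map[string]string{
		"cp_enable": "true",
		"address":   vip,
		"port":      port,
	}
	err := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
	if err == nil {
		env["lb_enable"] = "true"
		env["lb_port"] = port
	}
	return env
}

func main() {
	for k, v := range kubeVIPEnv("192.168.39.254", "8443") {
		fmt.Printf("%s=%s\n", k, v)
	}
}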
	I0717 18:21:27.322721  411620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:21:27.332415  411620 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:21:27.332494  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 18:21:27.341790  411620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0717 18:21:27.358131  411620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:21:27.375089  411620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0717 18:21:27.391904  411620 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0717 18:21:27.408237  411620 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 18:21:27.412061  411620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:21:27.423919  411620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:21:27.535667  411620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:21:27.553931  411620 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282 for IP: 192.168.39.147
	I0717 18:21:27.553956  411620 certs.go:194] generating shared ca certs ...
	I0717 18:21:27.553990  411620 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:27.554163  411620 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 18:21:27.554203  411620 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 18:21:27.554215  411620 certs.go:256] generating profile certs ...
	I0717 18:21:27.554275  411620 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key
	I0717 18:21:27.554289  411620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.crt with IP's: []
	I0717 18:21:27.887939  411620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.crt ...
	I0717 18:21:27.887977  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.crt: {Name:mk848572ed450a3c0e854a18c6d204c6a1ba57ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:27.888171  411620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key ...
	I0717 18:21:27.888183  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key: {Name:mk7325569a4e28ec58a5018d73ce806286c4b119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:27.888268  411620 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.17e1a0f3
	I0717 18:21:27.888296  411620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.17e1a0f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.147 192.168.39.254]
	I0717 18:21:27.962908  411620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.17e1a0f3 ...
	I0717 18:21:27.962942  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.17e1a0f3: {Name:mkb0a3a35931d3a052f3a164e025c02dd7779027 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:27.963108  411620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.17e1a0f3 ...
	I0717 18:21:27.963120  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.17e1a0f3: {Name:mk654e9c64fa1f1fd4c12efd7fb99ccb75cfcd8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:27.963196  411620 certs.go:381] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.17e1a0f3 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt
	I0717 18:21:27.963288  411620 certs.go:385] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.17e1a0f3 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key
	I0717 18:21:27.963347  411620 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key
	I0717 18:21:27.963361  411620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt with IP's: []
	I0717 18:21:28.111633  411620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt ...
	I0717 18:21:28.111665  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt: {Name:mkac7c5f45728ceef72617ed8d12521e601336b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:28.111822  411620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key ...
	I0717 18:21:28.111832  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key: {Name:mkf06ba1e36571cdd5d188ac594df13edd4b234f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
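The apiserver profile cert generated above is valid for the service IP, localhost, the node IP and the HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.147, 192.168.39.254). A minimal crypto/x509 sketch of issuing a cert with those IP SANs; it self-signs for brevity, whereas minikube signs these certs with its shared minikubeCA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServingCert issues a self-signed certificate valid for the given IP SANs.
// It is a simplified stand-in for minikube's profile cert generation.
func newServingCert(ips []net.IP) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikube"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:           ips,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.147"), net.ParseIP("192.168.39.254"),
	}
	cert, _, err := newServingCert(ips)
	if err != nil {
		fmt.Println("generating cert:", err)
		return
	}
	fmt.Printf("generated %d bytes of PEM\n", len(cert))
}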
	I0717 18:21:28.111904  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 18:21:28.111920  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 18:21:28.111932  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 18:21:28.111944  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 18:21:28.111968  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 18:21:28.111980  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 18:21:28.111993  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 18:21:28.112004  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 18:21:28.112060  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 18:21:28.112097  411620 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 18:21:28.112107  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 18:21:28.112125  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 18:21:28.112154  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:21:28.112175  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 18:21:28.112212  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:21:28.112244  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /usr/share/ca-certificates/4001712.pem
	I0717 18:21:28.112257  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:21:28.112266  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem -> /usr/share/ca-certificates/400171.pem
	I0717 18:21:28.112946  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:21:28.139960  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:21:28.171326  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:21:28.200748  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 18:21:28.223571  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 18:21:28.246441  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:21:28.269964  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:21:28.296771  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:21:28.331660  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 18:21:28.359733  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:21:28.382617  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 18:21:28.409625  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:21:28.425228  411620 ssh_runner.go:195] Run: openssl version
	I0717 18:21:28.430978  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:21:28.441324  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:21:28.445653  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:21:28.445703  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:21:28.451339  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:21:28.461307  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 18:21:28.471308  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 18:21:28.475698  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 18:21:28.475750  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 18:21:28.481428  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 18:21:28.491354  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 18:21:28.501324  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 18:21:28.505653  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 18:21:28.505702  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 18:21:28.511073  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
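Each CA copied under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 in the lines above). A small sketch of computing the hash with openssl and creating the link, assuming the same layout:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links certPath into /etc/ssl/certs under its OpenSSL subject
// hash, following the "<hash>.0" convention used by the ln -fs commands above.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}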
	I0717 18:21:28.521506  411620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:21:28.525658  411620 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 18:21:28.525708  411620 kubeadm.go:392] StartCluster: {Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:21:28.525783  411620 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:21:28.525839  411620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:21:28.561156  411620 cri.go:89] found id: ""
	I0717 18:21:28.561239  411620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:21:28.570672  411620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:21:28.582211  411620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:21:28.592945  411620 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:21:28.592963  411620 kubeadm.go:157] found existing configuration files:
	
	I0717 18:21:28.593005  411620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:21:28.601621  411620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:21:28.601667  411620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:21:28.611729  411620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:21:28.622365  411620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:21:28.622430  411620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:21:28.632965  411620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:21:28.642575  411620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:21:28.642830  411620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:21:28.652364  411620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:21:28.661729  411620 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:21:28.661771  411620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:21:28.670865  411620 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:21:28.789289  411620 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:21:28.789415  411620 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:21:28.908696  411620 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:21:28.908844  411620 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:21:28.908961  411620 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:21:29.111539  411620 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:21:29.171105  411620 out.go:204]   - Generating certificates and keys ...
	I0717 18:21:29.171275  411620 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:21:29.171391  411620 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:21:29.485093  411620 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 18:21:29.546994  411620 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 18:21:29.614542  411620 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 18:21:29.991363  411620 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 18:21:30.142705  411620 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 18:21:30.143042  411620 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-445282 localhost] and IPs [192.168.39.147 127.0.0.1 ::1]
	I0717 18:21:30.497273  411620 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 18:21:30.497682  411620 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-445282 localhost] and IPs [192.168.39.147 127.0.0.1 ::1]
	I0717 18:21:30.632333  411620 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 18:21:30.891004  411620 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 18:21:31.045778  411620 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 18:21:31.046110  411620 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:21:31.361884  411620 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:21:31.426103  411620 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:21:31.532864  411620 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:21:31.839968  411620 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:21:32.206433  411620 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:21:32.207041  411620 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:21:32.211248  411620 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:21:32.213444  411620 out.go:204]   - Booting up control plane ...
	I0717 18:21:32.213559  411620 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:21:32.213654  411620 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:21:32.213746  411620 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:21:32.227836  411620 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:21:32.228712  411620 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:21:32.228789  411620 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:21:32.352273  411620 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:21:32.352383  411620 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:21:32.853603  411620 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.726884ms
	I0717 18:21:32.853731  411620 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:21:38.851106  411620 kubeadm.go:310] [api-check] The API server is healthy after 6.000784359s
	I0717 18:21:38.867651  411620 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:21:38.881859  411620 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:21:38.910079  411620 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:21:38.910244  411620 kubeadm.go:310] [mark-control-plane] Marking the node ha-445282 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:21:38.928584  411620 kubeadm.go:310] [bootstrap-token] Using token: 1d2hng.iymafv4x15o5r3g5
	I0717 18:21:38.930137  411620 out.go:204]   - Configuring RBAC rules ...
	I0717 18:21:38.930242  411620 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:21:38.937081  411620 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:21:38.949768  411620 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:21:38.952623  411620 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:21:38.955474  411620 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:21:38.958885  411620 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:21:39.259682  411620 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:21:39.683847  411620 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:21:40.257777  411620 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:21:40.258689  411620 kubeadm.go:310] 
	I0717 18:21:40.258756  411620 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:21:40.258782  411620 kubeadm.go:310] 
	I0717 18:21:40.258882  411620 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:21:40.258891  411620 kubeadm.go:310] 
	I0717 18:21:40.258925  411620 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:21:40.258999  411620 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:21:40.259073  411620 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:21:40.259088  411620 kubeadm.go:310] 
	I0717 18:21:40.259169  411620 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:21:40.259185  411620 kubeadm.go:310] 
	I0717 18:21:40.259224  411620 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:21:40.259230  411620 kubeadm.go:310] 
	I0717 18:21:40.259295  411620 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:21:40.259403  411620 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:21:40.259506  411620 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:21:40.259519  411620 kubeadm.go:310] 
	I0717 18:21:40.259653  411620 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:21:40.259758  411620 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:21:40.259769  411620 kubeadm.go:310] 
	I0717 18:21:40.259872  411620 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1d2hng.iymafv4x15o5r3g5 \
	I0717 18:21:40.259999  411620 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 18:21:40.260043  411620 kubeadm.go:310] 	--control-plane 
	I0717 18:21:40.260055  411620 kubeadm.go:310] 
	I0717 18:21:40.260172  411620 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:21:40.260180  411620 kubeadm.go:310] 
	I0717 18:21:40.260277  411620 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1d2hng.iymafv4x15o5r3g5 \
	I0717 18:21:40.260425  411620 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 18:21:40.260799  411620 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:21:40.260848  411620 cni.go:84] Creating CNI manager for ""
	I0717 18:21:40.260860  411620 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 18:21:40.263475  411620 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 18:21:40.264768  411620 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 18:21:40.270397  411620 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 18:21:40.270415  411620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 18:21:40.288797  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 18:21:40.636457  411620 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:21:40.636556  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:40.636556  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-445282 minikube.k8s.io/updated_at=2024_07_17T18_21_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=ha-445282 minikube.k8s.io/primary=true
	I0717 18:21:40.831014  411620 ops.go:34] apiserver oom_adj: -16
	I0717 18:21:40.834914  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:41.336024  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:41.835178  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:42.335924  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:42.835263  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:43.335949  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:43.835493  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:44.335826  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:44.836061  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:45.335816  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:45.835226  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:46.335188  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:46.835610  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:47.336038  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:47.835093  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:48.335826  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:48.835672  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:49.335488  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:49.835294  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:50.335063  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:50.835711  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:51.335151  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:51.835852  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:52.335270  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:52.835971  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:21:52.926916  411620 kubeadm.go:1113] duration metric: took 12.290444855s to wait for elevateKubeSystemPrivileges
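The repeated "kubectl get sa default" calls above, issued roughly every 500ms, are how the runner waits for the default service account to exist before the kube-system privilege escalation is considered complete (12.29s in this run). A hedged sketch of the same poll loop, using a plain exec of kubectl instead of minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// timeout expires, which is the pattern behind the repeated log lines above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %v", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		fmt.Println(err)
	}
}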
	I0717 18:21:52.926959  411620 kubeadm.go:394] duration metric: took 24.401254511s to StartCluster
	I0717 18:21:52.926980  411620 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:52.927068  411620 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:21:52.927944  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:21:52.928193  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 18:21:52.928210  411620 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:21:52.928275  411620 addons.go:69] Setting storage-provisioner=true in profile "ha-445282"
	I0717 18:21:52.928185  411620 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:21:52.928317  411620 addons.go:69] Setting default-storageclass=true in profile "ha-445282"
	I0717 18:21:52.928331  411620 start.go:241] waiting for startup goroutines ...
	I0717 18:21:52.928345  411620 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-445282"
	I0717 18:21:52.928309  411620 addons.go:234] Setting addon storage-provisioner=true in "ha-445282"
	I0717 18:21:52.928417  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:21:52.928443  411620 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:21:52.928784  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:21:52.928788  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:21:52.928824  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:21:52.928845  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:21:52.944255  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38437
	I0717 18:21:52.944403  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41001
	I0717 18:21:52.944761  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:21:52.944892  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:21:52.945267  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:21:52.945290  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:21:52.945474  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:21:52.945502  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:21:52.945645  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:21:52.945815  411620 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:21:52.945868  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:21:52.946472  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:21:52.946523  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:21:52.948278  411620 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:21:52.948693  411620 kapi.go:59] client config for ha-445282: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.crt", KeyFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key", CAFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 18:21:52.949334  411620 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 18:21:52.949608  411620 addons.go:234] Setting addon default-storageclass=true in "ha-445282"
	I0717 18:21:52.949663  411620 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:21:52.950039  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:21:52.950091  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:21:52.962693  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45731
	I0717 18:21:52.963272  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:21:52.963853  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:21:52.963880  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:21:52.964273  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:21:52.964468  411620 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:21:52.965566  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41917
	I0717 18:21:52.966185  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:21:52.966274  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:52.966730  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:21:52.966760  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:21:52.967216  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:21:52.967820  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:21:52.967872  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:21:52.968008  411620 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:21:52.969183  411620 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:21:52.969202  411620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:21:52.969222  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:52.972583  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:52.973016  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:52.973042  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:52.973211  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:52.973411  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:52.973612  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:52.973747  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:21:52.983571  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I0717 18:21:52.983974  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:21:52.984512  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:21:52.984538  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:21:52.984856  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:21:52.985062  411620 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:21:52.986449  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:21:52.986759  411620 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:21:52.986788  411620 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:21:52.986809  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:21:52.989775  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:52.990209  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:21:52.990229  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:21:52.990380  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:21:52.990549  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:21:52.990718  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:21:52.990872  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:21:53.047876  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 18:21:53.115126  411620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:21:53.132943  411620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:21:53.303660  411620 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
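The ConfigMap rewrite logged at 18:21:53.047876 splices a hosts block ahead of CoreDNS's forward directive so that host.minikube.internal resolves to the gateway address 192.168.39.1 from inside the cluster. Assuming the kubeconfig context carries the profile name (minikube's default), the rewritten Corefile can be inspected with, for example:

	kubectl --context ha-445282 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'

The injected fragment should show a hosts block listing 192.168.39.1 host.minikube.internal with fallthrough, still followed by the original forward . /etc/resolv.conf line.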
	I0717 18:21:53.553631  411620 main.go:141] libmachine: Making call to close driver server
	I0717 18:21:53.553649  411620 main.go:141] libmachine: Making call to close driver server
	I0717 18:21:53.553661  411620 main.go:141] libmachine: (ha-445282) Calling .Close
	I0717 18:21:53.553667  411620 main.go:141] libmachine: (ha-445282) Calling .Close
	I0717 18:21:53.553969  411620 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:21:53.553985  411620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:21:53.553987  411620 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:21:53.553998  411620 main.go:141] libmachine: Making call to close driver server
	I0717 18:21:53.553999  411620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:21:53.554009  411620 main.go:141] libmachine: (ha-445282) Calling .Close
	I0717 18:21:53.554010  411620 main.go:141] libmachine: Making call to close driver server
	I0717 18:21:53.554063  411620 main.go:141] libmachine: (ha-445282) Calling .Close
	I0717 18:21:53.554251  411620 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:21:53.554276  411620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:21:53.555389  411620 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:21:53.555410  411620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:21:53.555563  411620 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0717 18:21:53.555572  411620 round_trippers.go:469] Request Headers:
	I0717 18:21:53.555581  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:21:53.555587  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:21:53.564705  411620 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 18:21:53.565391  411620 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0717 18:21:53.565408  411620 round_trippers.go:469] Request Headers:
	I0717 18:21:53.565419  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:21:53.565427  411620 round_trippers.go:473]     Content-Type: application/json
	I0717 18:21:53.565431  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:21:53.567364  411620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 18:21:53.567502  411620 main.go:141] libmachine: Making call to close driver server
	I0717 18:21:53.567516  411620 main.go:141] libmachine: (ha-445282) Calling .Close
	I0717 18:21:53.567902  411620 main.go:141] libmachine: (ha-445282) DBG | Closing plugin on server side
	I0717 18:21:53.567918  411620 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:21:53.567934  411620 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:21:53.569566  411620 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 18:21:53.570700  411620 addons.go:510] duration metric: took 642.489279ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0717 18:21:53.570735  411620 start.go:246] waiting for cluster config update ...
	I0717 18:21:53.570751  411620 start.go:255] writing updated cluster config ...
	I0717 18:21:53.572198  411620 out.go:177] 
	I0717 18:21:53.573378  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:21:53.573467  411620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:21:53.574946  411620 out.go:177] * Starting "ha-445282-m02" control-plane node in "ha-445282" cluster
	I0717 18:21:53.575738  411620 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:21:53.575763  411620 cache.go:56] Caching tarball of preloaded images
	I0717 18:21:53.575881  411620 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:21:53.575897  411620 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:21:53.575986  411620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:21:53.576144  411620 start.go:360] acquireMachinesLock for ha-445282-m02: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:21:53.576186  411620 start.go:364] duration metric: took 23.895µs to acquireMachinesLock for "ha-445282-m02"
	I0717 18:21:53.576202  411620 start.go:93] Provisioning new machine with config: &{Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:21:53.576266  411620 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0717 18:21:53.578158  411620 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 18:21:53.578229  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:21:53.578260  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:21:53.594869  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41589
	I0717 18:21:53.595312  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:21:53.595780  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:21:53.595801  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:21:53.596092  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:21:53.596278  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetMachineName
	I0717 18:21:53.596398  411620 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:21:53.596509  411620 start.go:159] libmachine.API.Create for "ha-445282" (driver="kvm2")
	I0717 18:21:53.596538  411620 client.go:168] LocalClient.Create starting
	I0717 18:21:53.596573  411620 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem
	I0717 18:21:53.596613  411620 main.go:141] libmachine: Decoding PEM data...
	I0717 18:21:53.596636  411620 main.go:141] libmachine: Parsing certificate...
	I0717 18:21:53.596705  411620 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem
	I0717 18:21:53.596731  411620 main.go:141] libmachine: Decoding PEM data...
	I0717 18:21:53.596745  411620 main.go:141] libmachine: Parsing certificate...
	I0717 18:21:53.596770  411620 main.go:141] libmachine: Running pre-create checks...
	I0717 18:21:53.596782  411620 main.go:141] libmachine: (ha-445282-m02) Calling .PreCreateCheck
	I0717 18:21:53.596936  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetConfigRaw
	I0717 18:21:53.597276  411620 main.go:141] libmachine: Creating machine...
	I0717 18:21:53.597290  411620 main.go:141] libmachine: (ha-445282-m02) Calling .Create
	I0717 18:21:53.597387  411620 main.go:141] libmachine: (ha-445282-m02) Creating KVM machine...
	I0717 18:21:53.598461  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found existing default KVM network
	I0717 18:21:53.598584  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found existing private KVM network mk-ha-445282
	I0717 18:21:53.598712  411620 main.go:141] libmachine: (ha-445282-m02) Setting up store path in /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02 ...
	I0717 18:21:53.598744  411620 main.go:141] libmachine: (ha-445282-m02) Building disk image from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 18:21:53.598792  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:53.598687  412011 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:21:53.598902  411620 main.go:141] libmachine: (ha-445282-m02) Downloading /home/jenkins/minikube-integration/19282-392903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 18:21:53.845657  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:53.845504  412011 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa...
	I0717 18:21:53.958434  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:53.958300  412011 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/ha-445282-m02.rawdisk...
	I0717 18:21:53.958468  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Writing magic tar header
	I0717 18:21:53.958479  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Writing SSH key tar header
	I0717 18:21:53.958487  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:53.958411  412011 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02 ...
	I0717 18:21:53.958499  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02
	I0717 18:21:53.958527  411620 main.go:141] libmachine: (ha-445282-m02) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02 (perms=drwx------)
	I0717 18:21:53.958553  411620 main.go:141] libmachine: (ha-445282-m02) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines (perms=drwxr-xr-x)
	I0717 18:21:53.958568  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines
	I0717 18:21:53.958579  411620 main.go:141] libmachine: (ha-445282-m02) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube (perms=drwxr-xr-x)
	I0717 18:21:53.958592  411620 main.go:141] libmachine: (ha-445282-m02) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903 (perms=drwxrwxr-x)
	I0717 18:21:53.958602  411620 main.go:141] libmachine: (ha-445282-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 18:21:53.958631  411620 main.go:141] libmachine: (ha-445282-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 18:21:53.958644  411620 main.go:141] libmachine: (ha-445282-m02) Creating domain...
	I0717 18:21:53.958690  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:21:53.958716  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903
	I0717 18:21:53.958729  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 18:21:53.958741  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Checking permissions on dir: /home/jenkins
	I0717 18:21:53.958753  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Checking permissions on dir: /home
	I0717 18:21:53.958763  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Skipping /home - not owner
	I0717 18:21:53.959673  411620 main.go:141] libmachine: (ha-445282-m02) define libvirt domain using xml: 
	I0717 18:21:53.959704  411620 main.go:141] libmachine: (ha-445282-m02) <domain type='kvm'>
	I0717 18:21:53.959711  411620 main.go:141] libmachine: (ha-445282-m02)   <name>ha-445282-m02</name>
	I0717 18:21:53.959718  411620 main.go:141] libmachine: (ha-445282-m02)   <memory unit='MiB'>2200</memory>
	I0717 18:21:53.959757  411620 main.go:141] libmachine: (ha-445282-m02)   <vcpu>2</vcpu>
	I0717 18:21:53.959781  411620 main.go:141] libmachine: (ha-445282-m02)   <features>
	I0717 18:21:53.959789  411620 main.go:141] libmachine: (ha-445282-m02)     <acpi/>
	I0717 18:21:53.959797  411620 main.go:141] libmachine: (ha-445282-m02)     <apic/>
	I0717 18:21:53.959803  411620 main.go:141] libmachine: (ha-445282-m02)     <pae/>
	I0717 18:21:53.959810  411620 main.go:141] libmachine: (ha-445282-m02)     
	I0717 18:21:53.959818  411620 main.go:141] libmachine: (ha-445282-m02)   </features>
	I0717 18:21:53.959829  411620 main.go:141] libmachine: (ha-445282-m02)   <cpu mode='host-passthrough'>
	I0717 18:21:53.959838  411620 main.go:141] libmachine: (ha-445282-m02)   
	I0717 18:21:53.959850  411620 main.go:141] libmachine: (ha-445282-m02)   </cpu>
	I0717 18:21:53.959877  411620 main.go:141] libmachine: (ha-445282-m02)   <os>
	I0717 18:21:53.959897  411620 main.go:141] libmachine: (ha-445282-m02)     <type>hvm</type>
	I0717 18:21:53.959909  411620 main.go:141] libmachine: (ha-445282-m02)     <boot dev='cdrom'/>
	I0717 18:21:53.959923  411620 main.go:141] libmachine: (ha-445282-m02)     <boot dev='hd'/>
	I0717 18:21:53.959939  411620 main.go:141] libmachine: (ha-445282-m02)     <bootmenu enable='no'/>
	I0717 18:21:53.959954  411620 main.go:141] libmachine: (ha-445282-m02)   </os>
	I0717 18:21:53.959965  411620 main.go:141] libmachine: (ha-445282-m02)   <devices>
	I0717 18:21:53.959976  411620 main.go:141] libmachine: (ha-445282-m02)     <disk type='file' device='cdrom'>
	I0717 18:21:53.959992  411620 main.go:141] libmachine: (ha-445282-m02)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/boot2docker.iso'/>
	I0717 18:21:53.960001  411620 main.go:141] libmachine: (ha-445282-m02)       <target dev='hdc' bus='scsi'/>
	I0717 18:21:53.960007  411620 main.go:141] libmachine: (ha-445282-m02)       <readonly/>
	I0717 18:21:53.960013  411620 main.go:141] libmachine: (ha-445282-m02)     </disk>
	I0717 18:21:53.960019  411620 main.go:141] libmachine: (ha-445282-m02)     <disk type='file' device='disk'>
	I0717 18:21:53.960027  411620 main.go:141] libmachine: (ha-445282-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 18:21:53.960047  411620 main.go:141] libmachine: (ha-445282-m02)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/ha-445282-m02.rawdisk'/>
	I0717 18:21:53.960058  411620 main.go:141] libmachine: (ha-445282-m02)       <target dev='hda' bus='virtio'/>
	I0717 18:21:53.960066  411620 main.go:141] libmachine: (ha-445282-m02)     </disk>
	I0717 18:21:53.960077  411620 main.go:141] libmachine: (ha-445282-m02)     <interface type='network'>
	I0717 18:21:53.960090  411620 main.go:141] libmachine: (ha-445282-m02)       <source network='mk-ha-445282'/>
	I0717 18:21:53.960100  411620 main.go:141] libmachine: (ha-445282-m02)       <model type='virtio'/>
	I0717 18:21:53.960114  411620 main.go:141] libmachine: (ha-445282-m02)     </interface>
	I0717 18:21:53.960131  411620 main.go:141] libmachine: (ha-445282-m02)     <interface type='network'>
	I0717 18:21:53.960144  411620 main.go:141] libmachine: (ha-445282-m02)       <source network='default'/>
	I0717 18:21:53.960155  411620 main.go:141] libmachine: (ha-445282-m02)       <model type='virtio'/>
	I0717 18:21:53.960164  411620 main.go:141] libmachine: (ha-445282-m02)     </interface>
	I0717 18:21:53.960174  411620 main.go:141] libmachine: (ha-445282-m02)     <serial type='pty'>
	I0717 18:21:53.960183  411620 main.go:141] libmachine: (ha-445282-m02)       <target port='0'/>
	I0717 18:21:53.960192  411620 main.go:141] libmachine: (ha-445282-m02)     </serial>
	I0717 18:21:53.960204  411620 main.go:141] libmachine: (ha-445282-m02)     <console type='pty'>
	I0717 18:21:53.960221  411620 main.go:141] libmachine: (ha-445282-m02)       <target type='serial' port='0'/>
	I0717 18:21:53.960233  411620 main.go:141] libmachine: (ha-445282-m02)     </console>
	I0717 18:21:53.960243  411620 main.go:141] libmachine: (ha-445282-m02)     <rng model='virtio'>
	I0717 18:21:53.960253  411620 main.go:141] libmachine: (ha-445282-m02)       <backend model='random'>/dev/random</backend>
	I0717 18:21:53.960261  411620 main.go:141] libmachine: (ha-445282-m02)     </rng>
	I0717 18:21:53.960266  411620 main.go:141] libmachine: (ha-445282-m02)     
	I0717 18:21:53.960270  411620 main.go:141] libmachine: (ha-445282-m02)     
	I0717 18:21:53.960278  411620 main.go:141] libmachine: (ha-445282-m02)   </devices>
	I0717 18:21:53.960282  411620 main.go:141] libmachine: (ha-445282-m02) </domain>
	I0717 18:21:53.960301  411620 main.go:141] libmachine: (ha-445282-m02) 
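The domain XML defined above gives the new node two virtio NICs: one on the private libvirt network mk-ha-445282 (the 192.168.39.0/24 range the cluster uses) and one on libvirt's default network. The retry loop that follows simply polls for a DHCP lease on the private network; assuming virsh access to the same qemu:///system URI the driver uses, the lease can also be watched by hand:

	virsh --connect qemu:///system net-dhcp-leases mk-ha-445282

Once an entry for MAC 52:54:00:a6:a9:c1 appears, the "Waiting to get IP" loop below resolves.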
	I0717 18:21:53.966620  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:c0:49:b8 in network default
	I0717 18:21:53.967184  411620 main.go:141] libmachine: (ha-445282-m02) Ensuring networks are active...
	I0717 18:21:53.967211  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:53.967832  411620 main.go:141] libmachine: (ha-445282-m02) Ensuring network default is active
	I0717 18:21:53.968125  411620 main.go:141] libmachine: (ha-445282-m02) Ensuring network mk-ha-445282 is active
	I0717 18:21:53.968497  411620 main.go:141] libmachine: (ha-445282-m02) Getting domain xml...
	I0717 18:21:53.969087  411620 main.go:141] libmachine: (ha-445282-m02) Creating domain...
	I0717 18:21:55.188085  411620 main.go:141] libmachine: (ha-445282-m02) Waiting to get IP...
	I0717 18:21:55.188904  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:55.189294  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:21:55.189322  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:55.189259  412011 retry.go:31] will retry after 207.621374ms: waiting for machine to come up
	I0717 18:21:55.398896  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:55.399353  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:21:55.399382  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:55.399306  412011 retry.go:31] will retry after 297.6147ms: waiting for machine to come up
	I0717 18:21:55.698557  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:55.699049  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:21:55.699073  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:55.698992  412011 retry.go:31] will retry after 352.642718ms: waiting for machine to come up
	I0717 18:21:56.053750  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:56.054148  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:21:56.054180  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:56.054105  412011 retry.go:31] will retry after 449.896159ms: waiting for machine to come up
	I0717 18:21:56.505896  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:56.506320  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:21:56.506348  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:56.506272  412011 retry.go:31] will retry after 487.736707ms: waiting for machine to come up
	I0717 18:21:56.995968  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:56.996402  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:21:56.996435  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:56.996331  412011 retry.go:31] will retry after 890.067855ms: waiting for machine to come up
	I0717 18:21:57.888589  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:57.889015  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:21:57.889049  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:57.888952  412011 retry.go:31] will retry after 932.094508ms: waiting for machine to come up
	I0717 18:21:58.823844  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:21:58.824672  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:21:58.824704  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:21:58.824619  412011 retry.go:31] will retry after 1.360476703s: waiting for machine to come up
	I0717 18:22:00.187007  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:00.187403  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:22:00.187433  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:22:00.187349  412011 retry.go:31] will retry after 1.695987259s: waiting for machine to come up
	I0717 18:22:01.885130  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:01.885528  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:22:01.885557  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:22:01.885486  412011 retry.go:31] will retry after 2.149050919s: waiting for machine to come up
	I0717 18:22:04.035710  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:04.036117  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:22:04.036148  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:22:04.036064  412011 retry.go:31] will retry after 1.757259212s: waiting for machine to come up
	I0717 18:22:05.795253  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:05.795675  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:22:05.795705  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:22:05.795644  412011 retry.go:31] will retry after 2.675849294s: waiting for machine to come up
	I0717 18:22:08.474451  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:08.474792  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:22:08.474828  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:22:08.474736  412011 retry.go:31] will retry after 3.611039345s: waiting for machine to come up
	I0717 18:22:12.086972  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:12.087451  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find current IP address of domain ha-445282-m02 in network mk-ha-445282
	I0717 18:22:12.087476  411620 main.go:141] libmachine: (ha-445282-m02) DBG | I0717 18:22:12.087390  412011 retry.go:31] will retry after 5.26115106s: waiting for machine to come up
	I0717 18:22:17.349693  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.350199  411620 main.go:141] libmachine: (ha-445282-m02) Found IP for machine: 192.168.39.198
	I0717 18:22:17.350228  411620 main.go:141] libmachine: (ha-445282-m02) Reserving static IP address...
	I0717 18:22:17.350237  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has current primary IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.350593  411620 main.go:141] libmachine: (ha-445282-m02) DBG | unable to find host DHCP lease matching {name: "ha-445282-m02", mac: "52:54:00:a6:a9:c1", ip: "192.168.39.198"} in network mk-ha-445282
	I0717 18:22:17.426961  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Getting to WaitForSSH function...
	I0717 18:22:17.426997  411620 main.go:141] libmachine: (ha-445282-m02) Reserved static IP address: 192.168.39.198
	I0717 18:22:17.427009  411620 main.go:141] libmachine: (ha-445282-m02) Waiting for SSH to be available...
	I0717 18:22:17.430298  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.430735  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:17.430764  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.430907  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Using SSH client type: external
	I0717 18:22:17.430935  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa (-rw-------)
	I0717 18:22:17.430970  411620 main.go:141] libmachine: (ha-445282-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:22:17.430985  411620 main.go:141] libmachine: (ha-445282-m02) DBG | About to run SSH command:
	I0717 18:22:17.430998  411620 main.go:141] libmachine: (ha-445282-m02) DBG | exit 0
	I0717 18:22:17.556606  411620 main.go:141] libmachine: (ha-445282-m02) DBG | SSH cmd err, output: <nil>: 
	I0717 18:22:17.556877  411620 main.go:141] libmachine: (ha-445282-m02) KVM machine creation complete!
	I0717 18:22:17.557176  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetConfigRaw
	I0717 18:22:17.557714  411620 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:22:17.557959  411620 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:22:17.558130  411620 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 18:22:17.558166  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetState
	I0717 18:22:17.559776  411620 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 18:22:17.559800  411620 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 18:22:17.559808  411620 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 18:22:17.559814  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:17.562429  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.562845  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:17.562884  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.563047  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:17.563247  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:17.563422  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:17.563546  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:17.563707  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:22:17.563994  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0717 18:22:17.564010  411620 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 18:22:17.667823  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:22:17.667848  411620 main.go:141] libmachine: Detecting the provisioner...
	I0717 18:22:17.667856  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:17.671014  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.671389  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:17.671420  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.671571  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:17.671811  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:17.672000  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:17.672110  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:17.672287  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:22:17.672514  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0717 18:22:17.672530  411620 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 18:22:17.781418  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 18:22:17.781532  411620 main.go:141] libmachine: found compatible host: buildroot
	I0717 18:22:17.781546  411620 main.go:141] libmachine: Provisioning with buildroot...
	I0717 18:22:17.781555  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetMachineName
	I0717 18:22:17.781830  411620 buildroot.go:166] provisioning hostname "ha-445282-m02"
	I0717 18:22:17.781854  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetMachineName
	I0717 18:22:17.782076  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:17.784828  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.785192  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:17.785226  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.785374  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:17.785556  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:17.785732  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:17.785894  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:17.786034  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:22:17.786203  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0717 18:22:17.786215  411620 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-445282-m02 && echo "ha-445282-m02" | sudo tee /etc/hostname
	I0717 18:22:17.902339  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-445282-m02
	
	I0717 18:22:17.902376  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:17.904945  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.905270  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:17.905298  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:17.905480  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:17.905686  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:17.905843  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:17.905973  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:17.906160  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:22:17.906402  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0717 18:22:17.906425  411620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-445282-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-445282-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-445282-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:22:18.018118  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:22:18.018160  411620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 18:22:18.018182  411620 buildroot.go:174] setting up certificates
	I0717 18:22:18.018194  411620 provision.go:84] configureAuth start
	I0717 18:22:18.018204  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetMachineName
	I0717 18:22:18.018501  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetIP
	I0717 18:22:18.021271  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.021744  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.021782  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.021943  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:18.024598  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.024981  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.025018  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.025148  411620 provision.go:143] copyHostCerts
	I0717 18:22:18.025184  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:22:18.025229  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 18:22:18.025240  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:22:18.025308  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 18:22:18.025397  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:22:18.025414  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 18:22:18.025421  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:22:18.025443  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 18:22:18.025488  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:22:18.025503  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 18:22:18.025509  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:22:18.025528  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 18:22:18.025576  411620 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.ha-445282-m02 san=[127.0.0.1 192.168.39.198 ha-445282-m02 localhost minikube]
	I0717 18:22:18.116857  411620 provision.go:177] copyRemoteCerts
	I0717 18:22:18.116917  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:22:18.116942  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:18.119855  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.120188  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.120223  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.120428  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:18.120612  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:18.120797  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:18.120917  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	I0717 18:22:18.204519  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 18:22:18.204602  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 18:22:18.228303  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 18:22:18.228386  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 18:22:18.252956  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 18:22:18.253028  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:22:18.277615  411620 provision.go:87] duration metric: took 259.401212ms to configureAuth
	I0717 18:22:18.277650  411620 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:22:18.277828  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:22:18.277902  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:18.280846  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.281294  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.281327  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.281567  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:18.281809  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:18.281997  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:18.282157  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:18.282395  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:22:18.282580  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0717 18:22:18.282595  411620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:22:18.572242  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:22:18.572270  411620 main.go:141] libmachine: Checking connection to Docker...
	I0717 18:22:18.572279  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetURL
	I0717 18:22:18.573653  411620 main.go:141] libmachine: (ha-445282-m02) DBG | Using libvirt version 6000000
	I0717 18:22:18.576062  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.576421  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.576447  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.576648  411620 main.go:141] libmachine: Docker is up and running!
	I0717 18:22:18.576667  411620 main.go:141] libmachine: Reticulating splines...
	I0717 18:22:18.576676  411620 client.go:171] duration metric: took 24.980126441s to LocalClient.Create
	I0717 18:22:18.576706  411620 start.go:167] duration metric: took 24.980198027s to libmachine.API.Create "ha-445282"
	I0717 18:22:18.576718  411620 start.go:293] postStartSetup for "ha-445282-m02" (driver="kvm2")
	I0717 18:22:18.576733  411620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:22:18.576758  411620 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:22:18.577030  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:22:18.577055  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:18.579483  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.579821  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.579852  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.580059  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:18.580319  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:18.580465  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:18.580640  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	I0717 18:22:18.663436  411620 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:22:18.667924  411620 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:22:18.667949  411620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 18:22:18.668021  411620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 18:22:18.668112  411620 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 18:22:18.668125  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /etc/ssl/certs/4001712.pem
	I0717 18:22:18.668231  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:22:18.678158  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:22:18.706673  411620 start.go:296] duration metric: took 129.933856ms for postStartSetup
	I0717 18:22:18.706734  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetConfigRaw
	I0717 18:22:18.707470  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetIP
	I0717 18:22:18.710115  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.710530  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.710555  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.710807  411620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:22:18.711016  411620 start.go:128] duration metric: took 25.13473763s to createHost
	I0717 18:22:18.711040  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:18.713449  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.713793  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.713819  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.714025  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:18.714208  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:18.714357  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:18.714489  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:18.714616  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:22:18.714806  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I0717 18:22:18.714819  411620 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:22:18.821413  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721240538.800156557
	
	I0717 18:22:18.821446  411620 fix.go:216] guest clock: 1721240538.800156557
	I0717 18:22:18.821455  411620 fix.go:229] Guest: 2024-07-17 18:22:18.800156557 +0000 UTC Remote: 2024-07-17 18:22:18.711027236 +0000 UTC m=+81.310705212 (delta=89.129321ms)
	I0717 18:22:18.821477  411620 fix.go:200] guest clock delta is within tolerance: 89.129321ms
	I0717 18:22:18.821484  411620 start.go:83] releasing machines lock for "ha-445282-m02", held for 25.245288365s
	I0717 18:22:18.821509  411620 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:22:18.821821  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetIP
	I0717 18:22:18.824555  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.824950  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.824978  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.827412  411620 out.go:177] * Found network options:
	I0717 18:22:18.828814  411620 out.go:177]   - NO_PROXY=192.168.39.147
	W0717 18:22:18.830089  411620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 18:22:18.830115  411620 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:22:18.830694  411620 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:22:18.830893  411620 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:22:18.830978  411620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:22:18.831022  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	W0717 18:22:18.831109  411620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 18:22:18.831206  411620 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:22:18.831230  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:22:18.833539  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.833781  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.833909  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.833935  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.834027  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:18.834172  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:18.834182  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:18.834217  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:18.834325  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:22:18.834383  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:18.834502  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:22:18.834508  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	I0717 18:22:18.834656  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:22:18.834819  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	I0717 18:22:19.066807  411620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:22:19.073339  411620 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:22:19.073398  411620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:22:19.090466  411620 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:22:19.090499  411620 start.go:495] detecting cgroup driver to use...
	I0717 18:22:19.090581  411620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:22:19.106914  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:22:19.121704  411620 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:22:19.121767  411620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:22:19.136199  411620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:22:19.151333  411620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:22:19.278557  411620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:22:19.448040  411620 docker.go:233] disabling docker service ...
	I0717 18:22:19.448138  411620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:22:19.462987  411620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:22:19.475866  411620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:22:19.600005  411620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:22:19.731330  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:22:19.745362  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:22:19.763506  411620 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:22:19.763647  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:22:19.773912  411620 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:22:19.773988  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:22:19.784000  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:22:19.794123  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:22:19.804403  411620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:22:19.814801  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:22:19.824951  411620 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:22:19.844093  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:22:19.856601  411620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:22:19.867850  411620 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:22:19.867922  411620 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:22:19.884094  411620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:22:19.895690  411620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:22:20.020399  411620 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:22:20.158643  411620 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:22:20.158733  411620 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:22:20.163293  411620 start.go:563] Will wait 60s for crictl version
	I0717 18:22:20.163344  411620 ssh_runner.go:195] Run: which crictl
	I0717 18:22:20.166947  411620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:22:20.203400  411620 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:22:20.203494  411620 ssh_runner.go:195] Run: crio --version
	I0717 18:22:20.234832  411620 ssh_runner.go:195] Run: crio --version
	I0717 18:22:20.264801  411620 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:22:20.266229  411620 out.go:177]   - env NO_PROXY=192.168.39.147
	I0717 18:22:20.267600  411620 main.go:141] libmachine: (ha-445282-m02) Calling .GetIP
	I0717 18:22:20.270264  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:20.270624  411620 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:22:07 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:22:20.270655  411620 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:22:20.270878  411620 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:22:20.275383  411620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:22:20.289213  411620 mustload.go:65] Loading cluster: ha-445282
	I0717 18:22:20.289486  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:22:20.289815  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:22:20.289854  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:22:20.305066  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44657
	I0717 18:22:20.305592  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:22:20.306084  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:22:20.306107  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:22:20.306476  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:22:20.306661  411620 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:22:20.308332  411620 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:22:20.308720  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:22:20.308757  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:22:20.323693  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
	I0717 18:22:20.324190  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:22:20.324723  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:22:20.324751  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:22:20.325057  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:22:20.325274  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:22:20.325471  411620 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282 for IP: 192.168.39.198
	I0717 18:22:20.325486  411620 certs.go:194] generating shared ca certs ...
	I0717 18:22:20.325505  411620 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:22:20.325667  411620 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 18:22:20.325708  411620 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 18:22:20.325718  411620 certs.go:256] generating profile certs ...
	I0717 18:22:20.325788  411620 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key
	I0717 18:22:20.325812  411620 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.80739ac4
	I0717 18:22:20.325827  411620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.80739ac4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.147 192.168.39.198 192.168.39.254]
	I0717 18:22:20.482321  411620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.80739ac4 ...
	I0717 18:22:20.482352  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.80739ac4: {Name:mk99f343f9591038fc52d5d3eb699d6c2e430eee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:22:20.482519  411620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.80739ac4 ...
	I0717 18:22:20.482533  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.80739ac4: {Name:mkcee6298db383444a1d2160d83549ebfb92dfa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:22:20.482600  411620 certs.go:381] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.80739ac4 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt
	I0717 18:22:20.482729  411620 certs.go:385] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.80739ac4 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key
	I0717 18:22:20.482856  411620 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key
	I0717 18:22:20.482873  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 18:22:20.482885  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 18:22:20.482898  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 18:22:20.482910  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 18:22:20.482921  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 18:22:20.482931  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 18:22:20.482940  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 18:22:20.482949  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 18:22:20.482997  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 18:22:20.483024  411620 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 18:22:20.483034  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 18:22:20.483054  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 18:22:20.483073  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:22:20.483096  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 18:22:20.483130  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:22:20.483154  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem -> /usr/share/ca-certificates/400171.pem
	I0717 18:22:20.483167  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /usr/share/ca-certificates/4001712.pem
	I0717 18:22:20.483178  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:22:20.483212  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:22:20.485999  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:22:20.486374  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:22:20.486404  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:22:20.486603  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:22:20.486820  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:22:20.487011  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:22:20.487139  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:22:20.560956  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 18:22:20.566416  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 18:22:20.581062  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 18:22:20.585730  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0717 18:22:20.597251  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 18:22:20.601984  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 18:22:20.613162  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 18:22:20.617881  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0717 18:22:20.628214  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 18:22:20.632470  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 18:22:20.642075  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 18:22:20.646356  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 18:22:20.657213  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:22:20.681109  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:22:20.703426  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:22:20.726808  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 18:22:20.750263  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0717 18:22:20.775428  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 18:22:20.799369  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:22:20.823943  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:22:20.847480  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 18:22:20.870382  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 18:22:20.894238  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:22:20.916414  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 18:22:20.941744  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0717 18:22:20.958242  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 18:22:20.975232  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0717 18:22:20.991557  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 18:22:21.008259  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 18:22:21.026700  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 18:22:21.045767  411620 ssh_runner.go:195] Run: openssl version
	I0717 18:22:21.051756  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:22:21.063164  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:22:21.067910  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:22:21.067974  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:22:21.073670  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:22:21.084567  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 18:22:21.095266  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 18:22:21.099539  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 18:22:21.099593  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 18:22:21.105114  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 18:22:21.115993  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 18:22:21.127214  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 18:22:21.132019  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 18:22:21.132078  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 18:22:21.137910  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:22:21.148844  411620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:22:21.153003  411620 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 18:22:21.153057  411620 kubeadm.go:934] updating node {m02 192.168.39.198 8443 v1.30.2 crio true true} ...
	I0717 18:22:21.153144  411620 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-445282-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:22:21.153168  411620 kube-vip.go:115] generating kube-vip config ...
	I0717 18:22:21.153241  411620 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 18:22:21.171613  411620 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 18:22:21.171707  411620 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0717 18:22:21.171771  411620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:22:21.182446  411620 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0717 18:22:21.182519  411620 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0717 18:22:21.193563  411620 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0717 18:22:21.193574  411620 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0717 18:22:21.193561  411620 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0717 18:22:21.193633  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 18:22:21.193707  411620 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 18:22:21.198328  411620 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0717 18:22:21.198359  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0717 18:22:22.297113  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 18:22:22.297206  411620 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 18:22:22.302199  411620 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0717 18:22:22.302234  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0717 18:22:22.512666  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:22:22.538115  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 18:22:22.538234  411620 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 18:22:22.545442  411620 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0717 18:22:22.545486  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
	I0717 18:22:22.958936  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 18:22:22.968857  411620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 18:22:22.986120  411620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:22:23.003113  411620 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 18:22:23.020072  411620 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 18:22:23.023996  411620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:22:23.036140  411620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:22:23.155530  411620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:22:23.173036  411620 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:22:23.173409  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:22:23.173475  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:22:23.188641  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I0717 18:22:23.189112  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:22:23.189587  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:22:23.189611  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:22:23.190011  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:22:23.190247  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:22:23.190450  411620 start.go:317] joinCluster: &{Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:22:23.190573  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 18:22:23.190598  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:22:23.193903  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:22:23.194385  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:22:23.194416  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:22:23.194589  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:22:23.194769  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:22:23.194939  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:22:23.195081  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:22:23.356747  411620 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:22:23.356804  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token usqba2.lrked5kopejozm88 --discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-445282-m02 --control-plane --apiserver-advertise-address=192.168.39.198 --apiserver-bind-port=8443"
	I0717 18:22:45.630321  411620 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token usqba2.lrked5kopejozm88 --discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-445282-m02 --control-plane --apiserver-advertise-address=192.168.39.198 --apiserver-bind-port=8443": (22.273491175s)
	I0717 18:22:45.630364  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 18:22:46.192092  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-445282-m02 minikube.k8s.io/updated_at=2024_07_17T18_22_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=ha-445282 minikube.k8s.io/primary=false
	I0717 18:22:46.313299  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-445282-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0717 18:22:46.446377  411620 start.go:319] duration metric: took 23.255923711s to joinCluster
	I0717 18:22:46.446481  411620 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:22:46.446836  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:22:46.448057  411620 out.go:177] * Verifying Kubernetes components...
	I0717 18:22:46.449426  411620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:22:46.675775  411620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:22:46.731102  411620 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:22:46.731356  411620 kapi.go:59] client config for ha-445282: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.crt", KeyFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key", CAFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 18:22:46.731435  411620 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.147:8443
	I0717 18:22:46.731667  411620 node_ready.go:35] waiting up to 6m0s for node "ha-445282-m02" to be "Ready" ...
	I0717 18:22:46.731771  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:46.731782  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:46.731793  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:46.731805  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:46.740915  411620 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 18:22:47.232891  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:47.232914  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:47.232922  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:47.232927  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:47.236320  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:47.732030  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:47.732052  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:47.732060  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:47.732065  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:47.736981  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:22:48.232321  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:48.232341  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:48.232349  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:48.232354  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:48.235414  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:48.732177  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:48.732201  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:48.732209  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:48.732217  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:48.735714  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:48.736615  411620 node_ready.go:53] node "ha-445282-m02" has status "Ready":"False"
	I0717 18:22:49.232003  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:49.232027  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:49.232035  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:49.232039  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:49.235207  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:49.732137  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:49.732161  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:49.732172  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:49.732178  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:49.735720  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:50.232701  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:50.232732  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:50.232745  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:50.232752  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:50.236791  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:22:50.732729  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:50.732753  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:50.732762  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:50.732766  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:50.736335  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:50.736999  411620 node_ready.go:53] node "ha-445282-m02" has status "Ready":"False"
	I0717 18:22:51.232430  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:51.232455  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:51.232467  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:51.232473  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:51.235591  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:51.732508  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:51.732528  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:51.732540  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:51.732544  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:51.735795  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:52.232856  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:52.232885  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:52.232893  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:52.232898  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:52.236521  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:52.731955  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:52.731977  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:52.731986  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:52.731990  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:52.735025  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:53.232765  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:53.232786  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:53.232795  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:53.232799  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:53.235400  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:22:53.236117  411620 node_ready.go:53] node "ha-445282-m02" has status "Ready":"False"
	I0717 18:22:53.732470  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:53.732499  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:53.732507  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:53.732513  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:53.735581  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:54.232362  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:54.232386  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:54.232397  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:54.232404  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:54.236199  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:54.732685  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:54.732710  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:54.732718  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:54.732721  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:54.735994  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:55.232669  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:55.232693  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:55.232704  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:55.232710  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:55.235921  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:55.236460  411620 node_ready.go:53] node "ha-445282-m02" has status "Ready":"False"
	I0717 18:22:55.732753  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:55.732778  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:55.732789  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:55.732795  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:55.735781  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:22:56.232861  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:56.232883  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:56.232892  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:56.232900  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:56.236875  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:56.732241  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:56.732264  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:56.732271  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:56.732276  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:56.735270  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:22:57.232261  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:57.232283  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:57.232291  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:57.232295  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:57.235265  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:22:57.732172  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:57.732193  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:57.732201  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:57.732208  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:57.735522  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:57.736392  411620 node_ready.go:53] node "ha-445282-m02" has status "Ready":"False"
	I0717 18:22:58.232336  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:58.232362  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:58.232373  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:58.232380  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:58.235729  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:58.731973  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:58.731996  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:58.732007  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:58.732013  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:58.734822  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:22:59.231899  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:59.231923  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:59.231934  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:59.231941  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:59.235367  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:59.732702  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:22:59.732725  411620 round_trippers.go:469] Request Headers:
	I0717 18:22:59.732736  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:22:59.732741  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:22:59.735902  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:22:59.736479  411620 node_ready.go:53] node "ha-445282-m02" has status "Ready":"False"
	I0717 18:23:00.232808  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:00.232834  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:00.232844  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:00.232850  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:00.236175  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:00.732185  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:00.732211  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:00.732222  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:00.732227  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:00.735924  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:01.232837  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:01.232866  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:01.232876  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:01.232881  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:01.236027  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:01.731946  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:01.731970  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:01.731978  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:01.731985  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:01.735059  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:02.232608  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:02.232636  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:02.232648  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:02.232656  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:02.237624  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:23:02.238784  411620 node_ready.go:53] node "ha-445282-m02" has status "Ready":"False"
	I0717 18:23:02.732902  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:02.732932  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:02.732943  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:02.732949  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:02.736916  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:03.231916  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:03.231944  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.231955  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.231960  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.238957  411620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 18:23:03.239566  411620 node_ready.go:49] node "ha-445282-m02" has status "Ready":"True"
	I0717 18:23:03.239590  411620 node_ready.go:38] duration metric: took 16.507907061s for node "ha-445282-m02" to be "Ready" ...
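
The repeated GETs above are a readiness poll: the node object is fetched roughly every 500ms until its Ready condition turns True, which here took about 16.5s. A rough client-go equivalent (a sketch under assumptions, not minikube's implementation; the kubeconfig path is a placeholder) looks like this:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // Give up after 6 minutes, matching the "waiting up to 6m0s" budget.
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()

        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, "ha-445282-m02", metav1.GetOptions{})
            if err == nil && nodeReady(node) {
                fmt.Println("node is Ready")
                return
            }
            if ctx.Err() != nil {
                log.Fatal("timed out waiting for node to become Ready")
            }
            // Poll roughly every 500ms, as the round_trippers timestamps show.
            time.Sleep(500 * time.Millisecond)
        }
    }
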
	I0717 18:23:03.239602  411620 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:23:03.239713  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:23:03.239726  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.239737  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.239742  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.245311  411620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 18:23:03.252420  411620 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-28njs" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.252519  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-28njs
	I0717 18:23:03.252531  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.252540  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.252547  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.255347  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:23:03.255945  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:03.255961  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.255968  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.255973  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.259166  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:03.259664  411620 pod_ready.go:92] pod "coredns-7db6d8ff4d-28njs" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:03.259685  411620 pod_ready.go:81] duration metric: took 7.241162ms for pod "coredns-7db6d8ff4d-28njs" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.259700  411620 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rzxbr" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.259777  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rzxbr
	I0717 18:23:03.259787  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.259798  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.259807  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.264083  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:23:03.264753  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:03.264774  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.264783  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.264790  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.267576  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:23:03.268627  411620 pod_ready.go:92] pod "coredns-7db6d8ff4d-rzxbr" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:03.268646  411620 pod_ready.go:81] duration metric: took 8.935277ms for pod "coredns-7db6d8ff4d-rzxbr" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.268655  411620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.268716  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/etcd-ha-445282
	I0717 18:23:03.268725  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.268732  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.268736  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.272514  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:03.273732  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:03.273748  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.273755  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.273758  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.276933  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:03.277751  411620 pod_ready.go:92] pod "etcd-ha-445282" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:03.277772  411620 pod_ready.go:81] duration metric: took 9.109829ms for pod "etcd-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.277783  411620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.277844  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/etcd-ha-445282-m02
	I0717 18:23:03.277854  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.277871  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.277882  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.281985  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:23:03.282692  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:03.282707  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.282713  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.282717  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.286570  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:03.287114  411620 pod_ready.go:92] pod "etcd-ha-445282-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:03.287140  411620 pod_ready.go:81] duration metric: took 9.34744ms for pod "etcd-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.287158  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.432569  411620 request.go:629] Waited for 145.334031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282
	I0717 18:23:03.432644  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282
	I0717 18:23:03.432649  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.432658  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.432666  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.436375  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:03.632592  411620 request.go:629] Waited for 195.443141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:03.632661  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:03.632666  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.632674  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.632679  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.636332  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:03.636982  411620 pod_ready.go:92] pod "kube-apiserver-ha-445282" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:03.637005  411620 pod_ready.go:81] duration metric: took 349.832596ms for pod "kube-apiserver-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.637016  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:03.832089  411620 request.go:629] Waited for 194.99822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282-m02
	I0717 18:23:03.832155  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282-m02
	I0717 18:23:03.832161  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:03.832172  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:03.832178  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:03.835467  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:04.032472  411620 request.go:629] Waited for 196.385406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:04.032576  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:04.032582  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:04.032590  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:04.032598  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:04.036568  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:04.037085  411620 pod_ready.go:92] pod "kube-apiserver-ha-445282-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:04.037105  411620 pod_ready.go:81] duration metric: took 400.081261ms for pod "kube-apiserver-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:04.037119  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:04.232297  411620 request.go:629] Waited for 195.094299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282
	I0717 18:23:04.232379  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282
	I0717 18:23:04.232384  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:04.232392  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:04.232397  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:04.235597  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:04.432692  411620 request.go:629] Waited for 196.36902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:04.432749  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:04.432754  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:04.432761  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:04.432766  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:04.436136  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:04.436922  411620 pod_ready.go:92] pod "kube-controller-manager-ha-445282" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:04.436948  411620 pod_ready.go:81] duration metric: took 399.821785ms for pod "kube-controller-manager-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:04.436958  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:04.631980  411620 request.go:629] Waited for 194.915166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282-m02
	I0717 18:23:04.632054  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282-m02
	I0717 18:23:04.632062  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:04.632073  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:04.632085  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:04.635130  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:04.832306  411620 request.go:629] Waited for 196.372293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:04.832379  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:04.832386  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:04.832398  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:04.832406  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:04.835884  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:04.836310  411620 pod_ready.go:92] pod "kube-controller-manager-ha-445282-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:04.836326  411620 pod_ready.go:81] duration metric: took 399.360617ms for pod "kube-controller-manager-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:04.836337  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vxmp8" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:05.032499  411620 request.go:629] Waited for 196.065865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vxmp8
	I0717 18:23:05.032575  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vxmp8
	I0717 18:23:05.032580  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:05.032588  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:05.032597  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:05.037228  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:23:05.232494  411620 request.go:629] Waited for 194.354027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:05.232574  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:05.232593  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:05.232607  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:05.232614  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:05.235981  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:05.236788  411620 pod_ready.go:92] pod "kube-proxy-vxmp8" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:05.236810  411620 pod_ready.go:81] duration metric: took 400.467642ms for pod "kube-proxy-vxmp8" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:05.236821  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xs65r" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:05.432728  411620 request.go:629] Waited for 195.789224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xs65r
	I0717 18:23:05.432813  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xs65r
	I0717 18:23:05.432825  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:05.432835  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:05.432845  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:05.436657  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:05.632936  411620 request.go:629] Waited for 195.401534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:05.633003  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:05.633009  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:05.633016  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:05.633021  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:05.636228  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:05.636856  411620 pod_ready.go:92] pod "kube-proxy-xs65r" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:05.636885  411620 pod_ready.go:81] duration metric: took 400.05579ms for pod "kube-proxy-xs65r" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:05.636898  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:05.832889  411620 request.go:629] Waited for 195.892653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282
	I0717 18:23:05.832952  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282
	I0717 18:23:05.832956  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:05.832964  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:05.832970  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:05.836805  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:06.032833  411620 request.go:629] Waited for 195.335122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:06.032896  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:23:06.032903  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:06.032914  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:06.032921  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:06.036958  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:23:06.037467  411620 pod_ready.go:92] pod "kube-scheduler-ha-445282" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:06.037485  411620 pod_ready.go:81] duration metric: took 400.575993ms for pod "kube-scheduler-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:06.037496  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:06.232622  411620 request.go:629] Waited for 195.022731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282-m02
	I0717 18:23:06.232688  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282-m02
	I0717 18:23:06.232693  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:06.232701  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:06.232706  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:06.236081  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:06.432069  411620 request.go:629] Waited for 195.338129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:06.432137  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:23:06.432144  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:06.432151  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:06.432155  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:06.435442  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:06.436165  411620 pod_ready.go:92] pod "kube-scheduler-ha-445282-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 18:23:06.436195  411620 pod_ready.go:81] duration metric: took 398.690878ms for pod "kube-scheduler-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:23:06.436210  411620 pod_ready.go:38] duration metric: took 3.196568559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
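
The block above repeats one pattern per system-critical pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler): fetch the pod, check its Ready condition, then fetch the node it runs on. The interleaved "Waited ... due to client-side throttling" messages come from client-go's default client-side rate limiter delaying bursts of requests. A hedged sketch of the per-pod check, with the rate limits raised so back-to-back GETs are not delayed (the QPS/Burst values and kubeconfig path are illustrative only):

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        // Raise the client-side rate limits; the defaults throttle request bursts.
        cfg.QPS = 50
        cfg.Burst = 100

        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        ctx := context.Background()
        for _, name := range []string{"etcd-ha-445282", "kube-apiserver-ha-445282", "kube-scheduler-ha-445282"} {
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s ready=%v\n", name, podReady(pod))
        }
    }
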
	I0717 18:23:06.436232  411620 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:23:06.436297  411620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:23:06.453751  411620 api_server.go:72] duration metric: took 20.007218471s to wait for apiserver process to appear ...
	I0717 18:23:06.453783  411620 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:23:06.453815  411620 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0717 18:23:06.458696  411620 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0717 18:23:06.458776  411620 round_trippers.go:463] GET https://192.168.39.147:8443/version
	I0717 18:23:06.458783  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:06.458791  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:06.458797  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:06.459817  411620 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 18:23:06.459983  411620 api_server.go:141] control plane version: v1.30.2
	I0717 18:23:06.460003  411620 api_server.go:131] duration metric: took 6.212787ms to wait for apiserver health ...
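
Once the pods are Ready, minikube confirms the apiserver process is running (the pgrep above) and then probes the /healthz endpoint, which answers "ok", followed by a /version request. The raw-path health probe can be reproduced with client-go's REST client; the sketch below assumes a clientset built from a placeholder kubeconfig and is for illustration only. The control-plane version line in the log corresponds to a Discovery().ServerVersion() style call.

    package main

    import (
        "context"
        "fmt"
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // GET /healthz on the apiserver; a healthy server returns the body "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("healthz: %s\n", body)
    }
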
	I0717 18:23:06.460013  411620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:23:06.632533  411620 request.go:629] Waited for 172.391381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:23:06.632629  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:23:06.632639  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:06.632656  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:06.632666  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:06.638297  411620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 18:23:06.642503  411620 system_pods.go:59] 17 kube-system pods found
	I0717 18:23:06.642532  411620 system_pods.go:61] "coredns-7db6d8ff4d-28njs" [1e8f2f11-c89c-42ae-829a-e2cf1dea11b6] Running
	I0717 18:23:06.642552  411620 system_pods.go:61] "coredns-7db6d8ff4d-rzxbr" [9630d87d-3470-4675-9b3c-a10ff614f5e1] Running
	I0717 18:23:06.642558  411620 system_pods.go:61] "etcd-ha-445282" [0575d3f5-82a8-4bfd-9386-00d014e19119] Running
	I0717 18:23:06.642563  411620 system_pods.go:61] "etcd-ha-445282-m02" [eb066c71-5455-4bd5-b5c0-f7858661506b] Running
	I0717 18:23:06.642567  411620 system_pods.go:61] "kindnet-75gcw" [872c1132-e584-47c1-a873-74615d52511b] Running
	I0717 18:23:06.642574  411620 system_pods.go:61] "kindnet-mdqdz" [fdb368a3-7d1c-4073-a351-85d6c92a27af] Running
	I0717 18:23:06.642579  411620 system_pods.go:61] "kube-apiserver-ha-445282" [d7814ca7-0944-4cac-8438-53640be6f85c] Running
	I0717 18:23:06.642587  411620 system_pods.go:61] "kube-apiserver-ha-445282-m02" [1014746f-377d-455f-b86b-66e4ee3aaddf] Running
	I0717 18:23:06.642593  411620 system_pods.go:61] "kube-controller-manager-ha-445282" [4b62f365-b4c2-46fd-9ca6-6c18f0205159] Running
	I0717 18:23:06.642597  411620 system_pods.go:61] "kube-controller-manager-ha-445282-m02" [f7ef8ac1-6f28-49f2-95a3-9224907eaf2b] Running
	I0717 18:23:06.642603  411620 system_pods.go:61] "kube-proxy-vxmp8" [cca555da-b93a-430c-8fbe-7e732af65a3a] Running
	I0717 18:23:06.642606  411620 system_pods.go:61] "kube-proxy-xs65r" [f0a65765-1826-47e6-ab8d-78ae6bb3abca] Running
	I0717 18:23:06.642611  411620 system_pods.go:61] "kube-scheduler-ha-445282" [ec2ecb84-3559-430f-815c-a2d2ccbb197b] Running
	I0717 18:23:06.642614  411620 system_pods.go:61] "kube-scheduler-ha-445282-m02" [71380e3c-2e00-4bd3-adf8-06af51f3bb49] Running
	I0717 18:23:06.642620  411620 system_pods.go:61] "kube-vip-ha-445282" [ca5bcedd-e43a-4711-bdfc-dc1c2c524d86] Running
	I0717 18:23:06.642623  411620 system_pods.go:61] "kube-vip-ha-445282-m02" [53798037-a734-43b8-be52-834446680e9a] Running
	I0717 18:23:06.642628  411620 system_pods.go:61] "storage-provisioner" [ae931c3b-8935-481d-bef4-0b05dad8c915] Running
	I0717 18:23:06.642639  411620 system_pods.go:74] duration metric: took 182.619199ms to wait for pod list to return data ...
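
The 17-pod inventory above comes from a single list of the kube-system namespace with each pod's phase inspected. A short sketch of the same listing (placeholder kubeconfig path, not minikube's code):

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            // Flag anything that is not in the Running phase.
            if p.Status.Phase != corev1.PodRunning {
                fmt.Printf("%s is %s\n", p.Name, p.Status.Phase)
            }
        }
    }
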
	I0717 18:23:06.642649  411620 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:23:06.832036  411620 request.go:629] Waited for 189.29106ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/default/serviceaccounts
	I0717 18:23:06.832148  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/default/serviceaccounts
	I0717 18:23:06.832162  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:06.832172  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:06.832178  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:06.835330  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:06.835603  411620 default_sa.go:45] found service account: "default"
	I0717 18:23:06.835627  411620 default_sa.go:55] duration metric: took 192.966758ms for default service account to be created ...
	I0717 18:23:06.835635  411620 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:23:07.032871  411620 request.go:629] Waited for 197.140021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:23:07.032955  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:23:07.032967  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:07.032976  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:07.032983  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:07.038873  411620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 18:23:07.043367  411620 system_pods.go:86] 17 kube-system pods found
	I0717 18:23:07.043395  411620 system_pods.go:89] "coredns-7db6d8ff4d-28njs" [1e8f2f11-c89c-42ae-829a-e2cf1dea11b6] Running
	I0717 18:23:07.043400  411620 system_pods.go:89] "coredns-7db6d8ff4d-rzxbr" [9630d87d-3470-4675-9b3c-a10ff614f5e1] Running
	I0717 18:23:07.043405  411620 system_pods.go:89] "etcd-ha-445282" [0575d3f5-82a8-4bfd-9386-00d014e19119] Running
	I0717 18:23:07.043409  411620 system_pods.go:89] "etcd-ha-445282-m02" [eb066c71-5455-4bd5-b5c0-f7858661506b] Running
	I0717 18:23:07.043413  411620 system_pods.go:89] "kindnet-75gcw" [872c1132-e584-47c1-a873-74615d52511b] Running
	I0717 18:23:07.043418  411620 system_pods.go:89] "kindnet-mdqdz" [fdb368a3-7d1c-4073-a351-85d6c92a27af] Running
	I0717 18:23:07.043423  411620 system_pods.go:89] "kube-apiserver-ha-445282" [d7814ca7-0944-4cac-8438-53640be6f85c] Running
	I0717 18:23:07.043430  411620 system_pods.go:89] "kube-apiserver-ha-445282-m02" [1014746f-377d-455f-b86b-66e4ee3aaddf] Running
	I0717 18:23:07.043441  411620 system_pods.go:89] "kube-controller-manager-ha-445282" [4b62f365-b4c2-46fd-9ca6-6c18f0205159] Running
	I0717 18:23:07.043448  411620 system_pods.go:89] "kube-controller-manager-ha-445282-m02" [f7ef8ac1-6f28-49f2-95a3-9224907eaf2b] Running
	I0717 18:23:07.043457  411620 system_pods.go:89] "kube-proxy-vxmp8" [cca555da-b93a-430c-8fbe-7e732af65a3a] Running
	I0717 18:23:07.043463  411620 system_pods.go:89] "kube-proxy-xs65r" [f0a65765-1826-47e6-ab8d-78ae6bb3abca] Running
	I0717 18:23:07.043468  411620 system_pods.go:89] "kube-scheduler-ha-445282" [ec2ecb84-3559-430f-815c-a2d2ccbb197b] Running
	I0717 18:23:07.043473  411620 system_pods.go:89] "kube-scheduler-ha-445282-m02" [71380e3c-2e00-4bd3-adf8-06af51f3bb49] Running
	I0717 18:23:07.043478  411620 system_pods.go:89] "kube-vip-ha-445282" [ca5bcedd-e43a-4711-bdfc-dc1c2c524d86] Running
	I0717 18:23:07.043481  411620 system_pods.go:89] "kube-vip-ha-445282-m02" [53798037-a734-43b8-be52-834446680e9a] Running
	I0717 18:23:07.043485  411620 system_pods.go:89] "storage-provisioner" [ae931c3b-8935-481d-bef4-0b05dad8c915] Running
	I0717 18:23:07.043492  411620 system_pods.go:126] duration metric: took 207.85115ms to wait for k8s-apps to be running ...
	I0717 18:23:07.043502  411620 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:23:07.043559  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:23:07.064349  411620 system_svc.go:56] duration metric: took 20.831074ms WaitForService to wait for kubelet
	I0717 18:23:07.064384  411620 kubeadm.go:582] duration metric: took 20.617857546s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
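
The kubelet verification above simply runs "sudo systemctl is-active --quiet service kubelet" on the guest through minikube's ssh_runner and treats a zero exit status as "active". The sketch below performs the same systemctl probe, but locally on the current host rather than over SSH, which is an assumption made purely for illustration:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` exits 0 when the unit is active.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
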
	I0717 18:23:07.064407  411620 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:23:07.232855  411620 request.go:629] Waited for 168.360051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes
	I0717 18:23:07.232915  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes
	I0717 18:23:07.232920  411620 round_trippers.go:469] Request Headers:
	I0717 18:23:07.232927  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:23:07.232932  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:23:07.236514  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:23:07.237354  411620 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:23:07.237374  411620 node_conditions.go:123] node cpu capacity is 2
	I0717 18:23:07.237385  411620 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:23:07.237389  411620 node_conditions.go:123] node cpu capacity is 2
	I0717 18:23:07.237393  411620 node_conditions.go:105] duration metric: took 172.980945ms to run NodePressure ...
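
The NodePressure step reads each node's advertised capacity; both nodes here report 17734596Ki of ephemeral storage and 2 CPUs. A sketch that prints the same two fields from a node list (placeholder kubeconfig path, illustration only):

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
    }
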
	I0717 18:23:07.237405  411620 start.go:241] waiting for startup goroutines ...
	I0717 18:23:07.237432  411620 start.go:255] writing updated cluster config ...
	I0717 18:23:07.239845  411620 out.go:177] 
	I0717 18:23:07.242288  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:23:07.242385  411620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:23:07.244139  411620 out.go:177] * Starting "ha-445282-m03" control-plane node in "ha-445282" cluster
	I0717 18:23:07.245356  411620 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:23:07.245382  411620 cache.go:56] Caching tarball of preloaded images
	I0717 18:23:07.245493  411620 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:23:07.245504  411620 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:23:07.245593  411620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:23:07.245756  411620 start.go:360] acquireMachinesLock for ha-445282-m03: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:23:07.245799  411620 start.go:364] duration metric: took 22.216µs to acquireMachinesLock for "ha-445282-m03"
	I0717 18:23:07.245813  411620 start.go:93] Provisioning new machine with config: &{Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:23:07.245958  411620 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0717 18:23:07.247628  411620 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 18:23:07.247726  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:23:07.247765  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:23:07.263749  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0717 18:23:07.264308  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:23:07.264900  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:23:07.264928  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:23:07.265246  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:23:07.265467  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetMachineName
	I0717 18:23:07.265622  411620 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:23:07.265806  411620 start.go:159] libmachine.API.Create for "ha-445282" (driver="kvm2")
	I0717 18:23:07.265840  411620 client.go:168] LocalClient.Create starting
	I0717 18:23:07.265882  411620 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem
	I0717 18:23:07.265925  411620 main.go:141] libmachine: Decoding PEM data...
	I0717 18:23:07.265950  411620 main.go:141] libmachine: Parsing certificate...
	I0717 18:23:07.266017  411620 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem
	I0717 18:23:07.266044  411620 main.go:141] libmachine: Decoding PEM data...
	I0717 18:23:07.266065  411620 main.go:141] libmachine: Parsing certificate...
	I0717 18:23:07.266093  411620 main.go:141] libmachine: Running pre-create checks...
	I0717 18:23:07.266105  411620 main.go:141] libmachine: (ha-445282-m03) Calling .PreCreateCheck
	I0717 18:23:07.266260  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetConfigRaw
	I0717 18:23:07.266679  411620 main.go:141] libmachine: Creating machine...
	I0717 18:23:07.266698  411620 main.go:141] libmachine: (ha-445282-m03) Calling .Create
	I0717 18:23:07.266819  411620 main.go:141] libmachine: (ha-445282-m03) Creating KVM machine...
	I0717 18:23:07.268181  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found existing default KVM network
	I0717 18:23:07.268340  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found existing private KVM network mk-ha-445282
	I0717 18:23:07.268466  411620 main.go:141] libmachine: (ha-445282-m03) Setting up store path in /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03 ...
	I0717 18:23:07.268521  411620 main.go:141] libmachine: (ha-445282-m03) Building disk image from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 18:23:07.268567  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:07.268445  412407 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:23:07.268680  411620 main.go:141] libmachine: (ha-445282-m03) Downloading /home/jenkins/minikube-integration/19282-392903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 18:23:07.532529  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:07.532372  412407 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa...
	I0717 18:23:07.686598  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:07.686461  412407 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/ha-445282-m03.rawdisk...
	I0717 18:23:07.686654  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Writing magic tar header
	I0717 18:23:07.686670  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Writing SSH key tar header
	I0717 18:23:07.687972  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:07.687403  412407 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03 ...
	I0717 18:23:07.688022  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03
	I0717 18:23:07.688046  411620 main.go:141] libmachine: (ha-445282-m03) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03 (perms=drwx------)
	I0717 18:23:07.688069  411620 main.go:141] libmachine: (ha-445282-m03) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines (perms=drwxr-xr-x)
	I0717 18:23:07.688077  411620 main.go:141] libmachine: (ha-445282-m03) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube (perms=drwxr-xr-x)
	I0717 18:23:07.688111  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines
	I0717 18:23:07.688142  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:23:07.688152  411620 main.go:141] libmachine: (ha-445282-m03) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903 (perms=drwxrwxr-x)
	I0717 18:23:07.688162  411620 main.go:141] libmachine: (ha-445282-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 18:23:07.688173  411620 main.go:141] libmachine: (ha-445282-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 18:23:07.688189  411620 main.go:141] libmachine: (ha-445282-m03) Creating domain...
	I0717 18:23:07.688204  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903
	I0717 18:23:07.688216  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 18:23:07.688227  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Checking permissions on dir: /home/jenkins
	I0717 18:23:07.688239  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Checking permissions on dir: /home
	I0717 18:23:07.688250  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Skipping /home - not owner
	I0717 18:23:07.689253  411620 main.go:141] libmachine: (ha-445282-m03) define libvirt domain using xml: 
	I0717 18:23:07.689275  411620 main.go:141] libmachine: (ha-445282-m03) <domain type='kvm'>
	I0717 18:23:07.689283  411620 main.go:141] libmachine: (ha-445282-m03)   <name>ha-445282-m03</name>
	I0717 18:23:07.689287  411620 main.go:141] libmachine: (ha-445282-m03)   <memory unit='MiB'>2200</memory>
	I0717 18:23:07.689293  411620 main.go:141] libmachine: (ha-445282-m03)   <vcpu>2</vcpu>
	I0717 18:23:07.689298  411620 main.go:141] libmachine: (ha-445282-m03)   <features>
	I0717 18:23:07.689304  411620 main.go:141] libmachine: (ha-445282-m03)     <acpi/>
	I0717 18:23:07.689311  411620 main.go:141] libmachine: (ha-445282-m03)     <apic/>
	I0717 18:23:07.689316  411620 main.go:141] libmachine: (ha-445282-m03)     <pae/>
	I0717 18:23:07.689320  411620 main.go:141] libmachine: (ha-445282-m03)     
	I0717 18:23:07.689326  411620 main.go:141] libmachine: (ha-445282-m03)   </features>
	I0717 18:23:07.689337  411620 main.go:141] libmachine: (ha-445282-m03)   <cpu mode='host-passthrough'>
	I0717 18:23:07.689344  411620 main.go:141] libmachine: (ha-445282-m03)   
	I0717 18:23:07.689349  411620 main.go:141] libmachine: (ha-445282-m03)   </cpu>
	I0717 18:23:07.689377  411620 main.go:141] libmachine: (ha-445282-m03)   <os>
	I0717 18:23:07.689412  411620 main.go:141] libmachine: (ha-445282-m03)     <type>hvm</type>
	I0717 18:23:07.689423  411620 main.go:141] libmachine: (ha-445282-m03)     <boot dev='cdrom'/>
	I0717 18:23:07.689430  411620 main.go:141] libmachine: (ha-445282-m03)     <boot dev='hd'/>
	I0717 18:23:07.689438  411620 main.go:141] libmachine: (ha-445282-m03)     <bootmenu enable='no'/>
	I0717 18:23:07.689445  411620 main.go:141] libmachine: (ha-445282-m03)   </os>
	I0717 18:23:07.689456  411620 main.go:141] libmachine: (ha-445282-m03)   <devices>
	I0717 18:23:07.689467  411620 main.go:141] libmachine: (ha-445282-m03)     <disk type='file' device='cdrom'>
	I0717 18:23:07.689484  411620 main.go:141] libmachine: (ha-445282-m03)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/boot2docker.iso'/>
	I0717 18:23:07.689499  411620 main.go:141] libmachine: (ha-445282-m03)       <target dev='hdc' bus='scsi'/>
	I0717 18:23:07.689515  411620 main.go:141] libmachine: (ha-445282-m03)       <readonly/>
	I0717 18:23:07.689524  411620 main.go:141] libmachine: (ha-445282-m03)     </disk>
	I0717 18:23:07.689534  411620 main.go:141] libmachine: (ha-445282-m03)     <disk type='file' device='disk'>
	I0717 18:23:07.689547  411620 main.go:141] libmachine: (ha-445282-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 18:23:07.689560  411620 main.go:141] libmachine: (ha-445282-m03)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/ha-445282-m03.rawdisk'/>
	I0717 18:23:07.689568  411620 main.go:141] libmachine: (ha-445282-m03)       <target dev='hda' bus='virtio'/>
	I0717 18:23:07.689596  411620 main.go:141] libmachine: (ha-445282-m03)     </disk>
	I0717 18:23:07.689619  411620 main.go:141] libmachine: (ha-445282-m03)     <interface type='network'>
	I0717 18:23:07.689633  411620 main.go:141] libmachine: (ha-445282-m03)       <source network='mk-ha-445282'/>
	I0717 18:23:07.689641  411620 main.go:141] libmachine: (ha-445282-m03)       <model type='virtio'/>
	I0717 18:23:07.689653  411620 main.go:141] libmachine: (ha-445282-m03)     </interface>
	I0717 18:23:07.689662  411620 main.go:141] libmachine: (ha-445282-m03)     <interface type='network'>
	I0717 18:23:07.689675  411620 main.go:141] libmachine: (ha-445282-m03)       <source network='default'/>
	I0717 18:23:07.689690  411620 main.go:141] libmachine: (ha-445282-m03)       <model type='virtio'/>
	I0717 18:23:07.689714  411620 main.go:141] libmachine: (ha-445282-m03)     </interface>
	I0717 18:23:07.689733  411620 main.go:141] libmachine: (ha-445282-m03)     <serial type='pty'>
	I0717 18:23:07.689744  411620 main.go:141] libmachine: (ha-445282-m03)       <target port='0'/>
	I0717 18:23:07.689754  411620 main.go:141] libmachine: (ha-445282-m03)     </serial>
	I0717 18:23:07.689765  411620 main.go:141] libmachine: (ha-445282-m03)     <console type='pty'>
	I0717 18:23:07.689775  411620 main.go:141] libmachine: (ha-445282-m03)       <target type='serial' port='0'/>
	I0717 18:23:07.689786  411620 main.go:141] libmachine: (ha-445282-m03)     </console>
	I0717 18:23:07.689796  411620 main.go:141] libmachine: (ha-445282-m03)     <rng model='virtio'>
	I0717 18:23:07.689813  411620 main.go:141] libmachine: (ha-445282-m03)       <backend model='random'>/dev/random</backend>
	I0717 18:23:07.689829  411620 main.go:141] libmachine: (ha-445282-m03)     </rng>
	I0717 18:23:07.689854  411620 main.go:141] libmachine: (ha-445282-m03)     
	I0717 18:23:07.689867  411620 main.go:141] libmachine: (ha-445282-m03)     
	I0717 18:23:07.689875  411620 main.go:141] libmachine: (ha-445282-m03)   </devices>
	I0717 18:23:07.689884  411620 main.go:141] libmachine: (ha-445282-m03) </domain>
	I0717 18:23:07.689893  411620 main.go:141] libmachine: (ha-445282-m03) 
	I0717 18:23:07.696417  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:36:6f:ce in network default
	I0717 18:23:07.697018  411620 main.go:141] libmachine: (ha-445282-m03) Ensuring networks are active...
	I0717 18:23:07.697034  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:07.697788  411620 main.go:141] libmachine: (ha-445282-m03) Ensuring network default is active
	I0717 18:23:07.698151  411620 main.go:141] libmachine: (ha-445282-m03) Ensuring network mk-ha-445282 is active
	I0717 18:23:07.698631  411620 main.go:141] libmachine: (ha-445282-m03) Getting domain xml...
	I0717 18:23:07.699442  411620 main.go:141] libmachine: (ha-445282-m03) Creating domain...
	I0717 18:23:08.918772  411620 main.go:141] libmachine: (ha-445282-m03) Waiting to get IP...
	I0717 18:23:08.919514  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:08.919957  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:08.919982  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:08.919927  412407 retry.go:31] will retry after 201.076635ms: waiting for machine to come up
	I0717 18:23:09.122189  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:09.122604  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:09.122651  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:09.122541  412407 retry.go:31] will retry after 360.345672ms: waiting for machine to come up
	I0717 18:23:09.483943  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:09.484376  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:09.484401  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:09.484346  412407 retry.go:31] will retry after 432.877971ms: waiting for machine to come up
	I0717 18:23:09.918549  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:09.919074  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:09.919111  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:09.919014  412407 retry.go:31] will retry after 482.54678ms: waiting for machine to come up
	I0717 18:23:10.402554  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:10.402929  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:10.402961  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:10.402874  412407 retry.go:31] will retry after 711.135179ms: waiting for machine to come up
	I0717 18:23:11.115357  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:11.115766  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:11.115806  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:11.115717  412407 retry.go:31] will retry after 696.130437ms: waiting for machine to come up
	I0717 18:23:11.813497  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:11.813963  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:11.813986  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:11.813907  412407 retry.go:31] will retry after 939.068462ms: waiting for machine to come up
	I0717 18:23:12.754574  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:12.755140  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:12.755193  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:12.755064  412407 retry.go:31] will retry after 1.438891186s: waiting for machine to come up
	I0717 18:23:14.195673  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:14.196027  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:14.196059  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:14.195974  412407 retry.go:31] will retry after 1.408170227s: waiting for machine to come up
	I0717 18:23:15.605804  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:15.606339  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:15.606368  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:15.606293  412407 retry.go:31] will retry after 1.419070639s: waiting for machine to come up
	I0717 18:23:17.027562  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:17.027966  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:17.027996  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:17.027912  412407 retry.go:31] will retry after 2.888338061s: waiting for machine to come up
	I0717 18:23:19.917660  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:19.918126  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:19.918154  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:19.918080  412407 retry.go:31] will retry after 2.69794922s: waiting for machine to come up
	I0717 18:23:22.617809  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:22.618152  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:22.618176  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:22.618109  412407 retry.go:31] will retry after 3.62794328s: waiting for machine to come up
	I0717 18:23:26.249574  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:26.249983  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find current IP address of domain ha-445282-m03 in network mk-ha-445282
	I0717 18:23:26.250006  411620 main.go:141] libmachine: (ha-445282-m03) DBG | I0717 18:23:26.249927  412407 retry.go:31] will retry after 5.249456453s: waiting for machine to come up
	I0717 18:23:31.501601  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:31.502073  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has current primary IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:31.502103  411620 main.go:141] libmachine: (ha-445282-m03) Found IP for machine: 192.168.39.214
	I0717 18:23:31.502118  411620 main.go:141] libmachine: (ha-445282-m03) Reserving static IP address...
	I0717 18:23:31.502477  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find host DHCP lease matching {name: "ha-445282-m03", mac: "52:54:00:da:b1:51", ip: "192.168.39.214"} in network mk-ha-445282
	I0717 18:23:31.574365  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Getting to WaitForSSH function...
	I0717 18:23:31.574400  411620 main.go:141] libmachine: (ha-445282-m03) Reserved static IP address: 192.168.39.214
	I0717 18:23:31.574414  411620 main.go:141] libmachine: (ha-445282-m03) Waiting for SSH to be available...
	I0717 18:23:31.577012  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:31.577401  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282
	I0717 18:23:31.577429  411620 main.go:141] libmachine: (ha-445282-m03) DBG | unable to find defined IP address of network mk-ha-445282 interface with MAC address 52:54:00:da:b1:51
	I0717 18:23:31.577556  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Using SSH client type: external
	I0717 18:23:31.577582  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa (-rw-------)
	I0717 18:23:31.577656  411620 main.go:141] libmachine: (ha-445282-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:23:31.577680  411620 main.go:141] libmachine: (ha-445282-m03) DBG | About to run SSH command:
	I0717 18:23:31.577695  411620 main.go:141] libmachine: (ha-445282-m03) DBG | exit 0
	I0717 18:23:31.581991  411620 main.go:141] libmachine: (ha-445282-m03) DBG | SSH cmd err, output: exit status 255: 
	I0717 18:23:31.582017  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 18:23:31.582047  411620 main.go:141] libmachine: (ha-445282-m03) DBG | command : exit 0
	I0717 18:23:31.582070  411620 main.go:141] libmachine: (ha-445282-m03) DBG | err     : exit status 255
	I0717 18:23:31.582098  411620 main.go:141] libmachine: (ha-445282-m03) DBG | output  : 
	I0717 18:23:34.582251  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Getting to WaitForSSH function...
	I0717 18:23:34.584637  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:34.584990  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:34.585036  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:34.585145  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Using SSH client type: external
	I0717 18:23:34.585178  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa (-rw-------)
	I0717 18:23:34.585216  411620 main.go:141] libmachine: (ha-445282-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:23:34.585235  411620 main.go:141] libmachine: (ha-445282-m03) DBG | About to run SSH command:
	I0717 18:23:34.585258  411620 main.go:141] libmachine: (ha-445282-m03) DBG | exit 0
	I0717 18:23:34.720617  411620 main.go:141] libmachine: (ha-445282-m03) DBG | SSH cmd err, output: <nil>: 
	I0717 18:23:34.720923  411620 main.go:141] libmachine: (ha-445282-m03) KVM machine creation complete!
	I0717 18:23:34.721281  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetConfigRaw
	I0717 18:23:34.721844  411620 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:23:34.722049  411620 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:23:34.722202  411620 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 18:23:34.722219  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetState
	I0717 18:23:34.723492  411620 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 18:23:34.723510  411620 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 18:23:34.723518  411620 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 18:23:34.723533  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:34.725826  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:34.726198  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:34.726231  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:34.726348  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:34.726488  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:34.726646  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:34.726814  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:34.727011  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:34.727244  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0717 18:23:34.727257  411620 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 18:23:34.839878  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:23:34.839911  411620 main.go:141] libmachine: Detecting the provisioner...
	I0717 18:23:34.839921  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:34.842511  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:34.842887  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:34.842911  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:34.843088  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:34.843268  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:34.843424  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:34.843581  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:34.843754  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:34.843923  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0717 18:23:34.843937  411620 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 18:23:34.961684  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 18:23:34.961763  411620 main.go:141] libmachine: found compatible host: buildroot
	I0717 18:23:34.961771  411620 main.go:141] libmachine: Provisioning with buildroot...
	I0717 18:23:34.961782  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetMachineName
	I0717 18:23:34.962054  411620 buildroot.go:166] provisioning hostname "ha-445282-m03"
	I0717 18:23:34.962090  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetMachineName
	I0717 18:23:34.962341  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:34.965135  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:34.965566  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:34.965593  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:34.965771  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:34.965955  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:34.966129  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:34.966272  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:34.966433  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:34.966671  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0717 18:23:34.966692  411620 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-445282-m03 && echo "ha-445282-m03" | sudo tee /etc/hostname
	I0717 18:23:35.095903  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-445282-m03
	
	I0717 18:23:35.095942  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:35.098557  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.098886  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:35.098922  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.099126  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:35.099336  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:35.099517  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:35.099682  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:35.099856  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:35.100071  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0717 18:23:35.100093  411620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-445282-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-445282-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-445282-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:23:35.225688  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:23:35.225719  411620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 18:23:35.225738  411620 buildroot.go:174] setting up certificates
	I0717 18:23:35.225751  411620 provision.go:84] configureAuth start
	I0717 18:23:35.225764  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetMachineName
	I0717 18:23:35.226052  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:23:35.228671  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.228956  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:35.228984  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.229126  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:35.231500  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.231873  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:35.231899  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.232066  411620 provision.go:143] copyHostCerts
	I0717 18:23:35.232106  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:23:35.232148  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 18:23:35.232161  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:23:35.232245  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 18:23:35.232379  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:23:35.232405  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 18:23:35.232413  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:23:35.232455  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 18:23:35.232569  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:23:35.232597  411620 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 18:23:35.232603  411620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:23:35.232640  411620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 18:23:35.232730  411620 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.ha-445282-m03 san=[127.0.0.1 192.168.39.214 ha-445282-m03 localhost minikube]
	I0717 18:23:35.441554  411620 provision.go:177] copyRemoteCerts
	I0717 18:23:35.441634  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:23:35.441682  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:35.444232  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.444596  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:35.444633  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.444869  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:35.445123  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:35.445281  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:35.445410  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:23:35.530710  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 18:23:35.530818  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 18:23:35.556555  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 18:23:35.556642  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 18:23:35.583020  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 18:23:35.583101  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:23:35.608000  411620 provision.go:87] duration metric: took 382.235848ms to configureAuth
	I0717 18:23:35.608030  411620 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:23:35.608241  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:23:35.608314  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:35.611002  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.611386  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:35.611417  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.611570  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:35.611813  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:35.612041  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:35.612199  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:35.612350  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:35.612576  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0717 18:23:35.612596  411620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:23:35.886127  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:23:35.886172  411620 main.go:141] libmachine: Checking connection to Docker...
	I0717 18:23:35.886183  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetURL
	I0717 18:23:35.887590  411620 main.go:141] libmachine: (ha-445282-m03) DBG | Using libvirt version 6000000
	I0717 18:23:35.889859  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.890222  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:35.890255  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.890372  411620 main.go:141] libmachine: Docker is up and running!
	I0717 18:23:35.890388  411620 main.go:141] libmachine: Reticulating splines...
	I0717 18:23:35.890398  411620 client.go:171] duration metric: took 28.624547488s to LocalClient.Create
	I0717 18:23:35.890427  411620 start.go:167] duration metric: took 28.624622446s to libmachine.API.Create "ha-445282"
	I0717 18:23:35.890440  411620 start.go:293] postStartSetup for "ha-445282-m03" (driver="kvm2")
	I0717 18:23:35.890455  411620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:23:35.890491  411620 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:23:35.890754  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:23:35.890776  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:35.892685  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.893019  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:35.893045  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:35.893179  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:35.893376  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:35.893559  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:35.893722  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:23:35.979823  411620 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:23:35.984380  411620 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:23:35.984406  411620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 18:23:35.984471  411620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 18:23:35.984588  411620 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 18:23:35.984598  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /etc/ssl/certs/4001712.pem
	I0717 18:23:35.984689  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:23:35.994509  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:23:36.020925  411620 start.go:296] duration metric: took 130.467328ms for postStartSetup
	I0717 18:23:36.021000  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetConfigRaw
	I0717 18:23:36.021689  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:23:36.024364  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.024740  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:36.024763  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.025035  411620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:23:36.025250  411620 start.go:128] duration metric: took 28.779273648s to createHost
	I0717 18:23:36.025278  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:36.027479  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.027855  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:36.027882  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.028023  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:36.028204  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:36.028355  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:36.028545  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:36.028700  411620 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:36.028908  411620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0717 18:23:36.028923  411620 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:23:36.145672  411620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721240616.127894753
	
	I0717 18:23:36.145710  411620 fix.go:216] guest clock: 1721240616.127894753
	I0717 18:23:36.145720  411620 fix.go:229] Guest: 2024-07-17 18:23:36.127894753 +0000 UTC Remote: 2024-07-17 18:23:36.025262913 +0000 UTC m=+158.624940901 (delta=102.63184ms)
	I0717 18:23:36.145744  411620 fix.go:200] guest clock delta is within tolerance: 102.63184ms
	I0717 18:23:36.145750  411620 start.go:83] releasing machines lock for "ha-445282-m03", held for 28.899944415s
	I0717 18:23:36.145779  411620 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:23:36.146142  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:23:36.148785  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.149154  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:36.149188  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.151822  411620 out.go:177] * Found network options:
	I0717 18:23:36.153314  411620 out.go:177]   - NO_PROXY=192.168.39.147,192.168.39.198
	W0717 18:23:36.154591  411620 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 18:23:36.154611  411620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 18:23:36.154627  411620 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:23:36.155321  411620 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:23:36.155552  411620 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:23:36.155639  411620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:23:36.155689  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	W0717 18:23:36.155809  411620 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 18:23:36.155833  411620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 18:23:36.155911  411620 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:23:36.155932  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:23:36.158623  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.158789  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.159055  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:36.159084  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.159224  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:36.159251  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:36.159258  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:36.159387  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:23:36.159470  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:36.159539  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:23:36.159609  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:36.159661  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:23:36.159725  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:23:36.159761  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:23:36.400733  411620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:23:36.406828  411620 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:23:36.406914  411620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:23:36.423355  411620 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:23:36.423381  411620 start.go:495] detecting cgroup driver to use...
	I0717 18:23:36.423454  411620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:23:36.439909  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:23:36.454185  411620 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:23:36.454250  411620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:23:36.468126  411620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:23:36.481535  411620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:23:36.596112  411620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:23:36.749997  411620 docker.go:233] disabling docker service ...
	I0717 18:23:36.750085  411620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:23:36.764921  411620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:23:36.779059  411620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:23:36.915600  411620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:23:37.026893  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:23:37.042207  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:23:37.061833  411620 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:23:37.061917  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:23:37.073663  411620 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:23:37.073732  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:23:37.085373  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:23:37.096230  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:23:37.107687  411620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:23:37.119064  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:23:37.130276  411620 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:23:37.148769  411620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:23:37.159195  411620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:23:37.169178  411620 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:23:37.169235  411620 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:23:37.183378  411620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:23:37.192909  411620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:23:37.304732  411620 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:23:37.451054  411620 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:23:37.451138  411620 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:23:37.456509  411620 start.go:563] Will wait 60s for crictl version
	I0717 18:23:37.456565  411620 ssh_runner.go:195] Run: which crictl
	I0717 18:23:37.460458  411620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:23:37.507517  411620 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:23:37.507597  411620 ssh_runner.go:195] Run: crio --version
	I0717 18:23:37.538306  411620 ssh_runner.go:195] Run: crio --version
	I0717 18:23:37.573280  411620 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:23:37.574440  411620 out.go:177]   - env NO_PROXY=192.168.39.147
	I0717 18:23:37.575673  411620 out.go:177]   - env NO_PROXY=192.168.39.147,192.168.39.198
	I0717 18:23:37.576672  411620 main.go:141] libmachine: (ha-445282-m03) Calling .GetIP
	I0717 18:23:37.579447  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:37.579942  411620 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:23:37.579977  411620 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:23:37.580196  411620 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:23:37.584592  411620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:23:37.597296  411620 mustload.go:65] Loading cluster: ha-445282
	I0717 18:23:37.597507  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:23:37.597758  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:23:37.597801  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:23:37.613675  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38415
	I0717 18:23:37.614095  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:23:37.614531  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:23:37.614559  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:23:37.614892  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:23:37.615095  411620 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:23:37.616611  411620 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:23:37.616934  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:23:37.616968  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:23:37.631684  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36573
	I0717 18:23:37.632122  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:23:37.632615  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:23:37.632639  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:23:37.632937  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:23:37.633141  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:23:37.633320  411620 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282 for IP: 192.168.39.214
	I0717 18:23:37.633334  411620 certs.go:194] generating shared ca certs ...
	I0717 18:23:37.633357  411620 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:23:37.633494  411620 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 18:23:37.633529  411620 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 18:23:37.633538  411620 certs.go:256] generating profile certs ...
	I0717 18:23:37.633608  411620 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key
	I0717 18:23:37.633638  411620 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.82168af2
	I0717 18:23:37.633653  411620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.82168af2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.147 192.168.39.198 192.168.39.214 192.168.39.254]
	I0717 18:23:38.109453  411620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.82168af2 ...
	I0717 18:23:38.109485  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.82168af2: {Name:mkdb824e5b55da3266aa6f37148aafce183da162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:23:38.109692  411620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.82168af2 ...
	I0717 18:23:38.109712  411620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.82168af2: {Name:mk56670ee8ee75e573097f8cc3976a91e07aaece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:23:38.109820  411620 certs.go:381] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.82168af2 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt
	I0717 18:23:38.109969  411620 certs.go:385] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.82168af2 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key
	I0717 18:23:38.110131  411620 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key
	I0717 18:23:38.110154  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 18:23:38.110173  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 18:23:38.110192  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 18:23:38.110210  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 18:23:38.110228  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 18:23:38.110245  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 18:23:38.110262  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 18:23:38.110279  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 18:23:38.110343  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 18:23:38.110382  411620 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 18:23:38.110394  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 18:23:38.110427  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 18:23:38.110459  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:23:38.110490  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 18:23:38.110542  411620 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:23:38.110580  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:23:38.110609  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem -> /usr/share/ca-certificates/400171.pem
	I0717 18:23:38.110627  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /usr/share/ca-certificates/4001712.pem
	I0717 18:23:38.110671  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:23:38.114085  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:23:38.114566  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:23:38.114597  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:23:38.114810  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:23:38.115044  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:23:38.115219  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:23:38.115365  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:23:38.188927  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 18:23:38.194366  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 18:23:38.206584  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 18:23:38.211291  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0717 18:23:38.221523  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 18:23:38.225778  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 18:23:38.236121  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 18:23:38.240239  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0717 18:23:38.251927  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 18:23:38.256162  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 18:23:38.266944  411620 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 18:23:38.271768  411620 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 18:23:38.282802  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:23:38.308765  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:23:38.334255  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:23:38.359295  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 18:23:38.383022  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0717 18:23:38.410871  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 18:23:38.435726  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:23:38.461125  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:23:38.485187  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:23:38.510887  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 18:23:38.536966  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 18:23:38.563106  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 18:23:38.580790  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0717 18:23:38.598393  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 18:23:38.616059  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0717 18:23:38.633015  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 18:23:38.649426  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 18:23:38.666226  411620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
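The apiserver certificate generated above was issued for the control-plane VIP (192.168.39.254) plus the three node IPs and the in-cluster service addresses, and has just been copied to /var/lib/minikube/certs. Its SANs can be spot-checked on the node with openssl (a sketch only, assuming shell access; not something the test itself runs):

    # Print the Subject Alternative Names of the freshly copied API server cert.
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
        | grep -A1 'Subject Alternative Name'
    # Should list 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.147, 192.168.39.198,
    # 192.168.39.214 and 192.168.39.254, matching the "Generating cert ... with IP's" line above.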
	I0717 18:23:38.683149  411620 ssh_runner.go:195] Run: openssl version
	I0717 18:23:38.689111  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 18:23:38.701073  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 18:23:38.705929  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 18:23:38.705999  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 18:23:38.712084  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 18:23:38.722985  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 18:23:38.734081  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 18:23:38.738843  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 18:23:38.738901  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 18:23:38.744576  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:23:38.755741  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:23:38.766405  411620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:23:38.771070  411620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:23:38.771119  411620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:23:38.777098  411620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:23:38.787460  411620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:23:38.791509  411620 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 18:23:38.791566  411620 kubeadm.go:934] updating node {m03 192.168.39.214 8443 v1.30.2 crio true true} ...
	I0717 18:23:38.791711  411620 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-445282-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:23:38.791742  411620 kube-vip.go:115] generating kube-vip config ...
	I0717 18:23:38.791777  411620 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 18:23:38.807319  411620 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 18:23:38.807395  411620 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
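The generated kube-vip manifest binds the virtual IP 192.168.39.254 to eth0 and enables control-plane load-balancing on port 8443; a few lines below it is written out as the static pod /etc/kubernetes/manifests/kube-vip.yaml. A quick check that the VIP actually landed on the interface (an illustrative sketch, assuming shell access on whichever node currently holds the plndr-cp-lock lease):

    # The leader node should carry the VIP on eth0.
    ip addr show dev eth0 | grep 192.168.39.254
    # The static pod itself can be listed through CRI-O:
    sudo crictl pods --name kube-vip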
	I0717 18:23:38.807454  411620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:23:38.818576  411620 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0717 18:23:38.818639  411620 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0717 18:23:38.828511  411620 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0717 18:23:38.828542  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 18:23:38.828548  411620 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0717 18:23:38.828573  411620 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0717 18:23:38.828593  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:23:38.828595  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 18:23:38.828622  411620 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 18:23:38.828653  411620 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 18:23:38.843334  411620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 18:23:38.843355  411620 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0717 18:23:38.843374  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0717 18:23:38.843419  411620 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 18:23:38.843456  411620 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0717 18:23:38.843486  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0717 18:23:38.859958  411620 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0717 18:23:38.860016  411620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
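Because the new node has no cached binaries, minikube streams kubectl, kubeadm and kubelet from dl.k8s.io, checking each against its published .sha256 file before installing it under /var/lib/minikube/binaries/v1.30.2. An equivalent manual download and verification (a sketch only; the URLs are the ones quoted in the log above):

    # Fetch kubelet v1.30.2 and verify it against the upstream checksum.
    curl -LO https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check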
	I0717 18:23:39.759339  411620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 18:23:39.769905  411620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 18:23:39.788059  411620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:23:39.804267  411620 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 18:23:39.820446  411620 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 18:23:39.824470  411620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:23:39.836911  411620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:23:39.959606  411620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:23:39.977993  411620 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:23:39.978393  411620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:23:39.978448  411620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:23:39.994038  411620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38859
	I0717 18:23:39.994617  411620 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:23:39.995123  411620 main.go:141] libmachine: Using API Version  1
	I0717 18:23:39.995147  411620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:23:39.995517  411620 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:23:39.995715  411620 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:23:39.995910  411620 start.go:317] joinCluster: &{Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cluster
Name:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:23:39.996068  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 18:23:39.996089  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:23:39.999078  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:23:39.999597  411620 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:23:39.999626  411620 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:23:39.999780  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:23:39.999974  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:23:40.000144  411620 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:23:40.000299  411620 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:23:40.173669  411620 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:23:40.173723  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lsggqp.pqujppmj7tj4ps2p --discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-445282-m03 --control-plane --apiserver-advertise-address=192.168.39.214 --apiserver-bind-port=8443"
	I0717 18:24:04.316247  411620 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lsggqp.pqujppmj7tj4ps2p --discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-445282-m03 --control-plane --apiserver-advertise-address=192.168.39.214 --apiserver-bind-port=8443": (24.142488446s)
	I0717 18:24:04.316288  411620 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 18:24:04.916010  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-445282-m03 minikube.k8s.io/updated_at=2024_07_17T18_24_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=ha-445282 minikube.k8s.io/primary=false
	I0717 18:24:05.051194  411620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-445282-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0717 18:24:05.196094  411620 start.go:319] duration metric: took 25.200179282s to joinCluster
	I0717 18:24:05.196187  411620 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:24:05.196562  411620 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:24:05.197861  411620 out.go:177] * Verifying Kubernetes components...
	I0717 18:24:05.199310  411620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:24:05.426302  411620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:24:05.444554  411620 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:24:05.444810  411620 kapi.go:59] client config for ha-445282: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.crt", KeyFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key", CAFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 18:24:05.444878  411620 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.147:8443
	I0717 18:24:05.445090  411620 node_ready.go:35] waiting up to 6m0s for node "ha-445282-m03" to be "Ready" ...
	I0717 18:24:05.445180  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:05.445189  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:05.445197  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:05.445201  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:05.448758  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:05.945817  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:05.945851  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:05.945863  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:05.945868  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:05.950088  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:24:06.445797  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:06.445823  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:06.445835  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:06.445840  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:06.450734  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:24:06.945735  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:06.945766  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:06.945779  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:06.945787  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:06.949746  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:07.445759  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:07.445782  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:07.445790  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:07.445796  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:07.450492  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:24:07.451076  411620 node_ready.go:53] node "ha-445282-m03" has status "Ready":"False"
	I0717 18:24:07.945782  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:07.945811  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:07.945829  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:07.945874  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:07.950594  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:24:08.446029  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:08.446056  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:08.446067  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:08.446072  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:08.449253  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:08.946045  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:08.946074  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:08.946085  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:08.946092  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:08.949575  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:09.445390  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:09.445416  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:09.445446  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:09.445455  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:09.451340  411620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 18:24:09.452026  411620 node_ready.go:53] node "ha-445282-m03" has status "Ready":"False"
	I0717 18:24:09.945300  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:09.945324  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:09.945333  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:09.945339  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:09.948651  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:10.445299  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:10.445327  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:10.445336  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:10.445341  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:10.448853  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:10.946318  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:10.946341  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:10.946350  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:10.946354  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:10.950605  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:24:11.445437  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:11.445457  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:11.445465  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:11.445469  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:11.448314  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:11.945805  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:11.945833  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:11.945844  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:11.945852  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:11.949297  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:11.950044  411620 node_ready.go:53] node "ha-445282-m03" has status "Ready":"False"
	I0717 18:24:12.445974  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:12.445995  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:12.446003  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:12.446008  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:12.449645  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:12.945772  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:12.945797  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:12.945805  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:12.945810  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:12.949538  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:13.445755  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:13.445783  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:13.445793  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:13.445800  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:13.449093  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:13.945786  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:13.945810  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:13.945819  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:13.945824  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:13.955336  411620 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 18:24:13.955998  411620 node_ready.go:53] node "ha-445282-m03" has status "Ready":"False"
	I0717 18:24:14.445729  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:14.445753  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:14.445761  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:14.445765  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:14.449626  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:14.945601  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:14.945624  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:14.945633  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:14.945637  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:14.949007  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:15.446240  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:15.446276  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:15.446288  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:15.446295  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:15.450690  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:24:15.945372  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:15.945405  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:15.945417  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:15.945447  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:15.949002  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:16.445892  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:16.445916  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:16.445924  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:16.445928  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:16.451015  411620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 18:24:16.452254  411620 node_ready.go:53] node "ha-445282-m03" has status "Ready":"False"
	I0717 18:24:16.945603  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:16.945638  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:16.945645  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:16.945649  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:16.948855  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:17.445613  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:17.445645  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:17.445653  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:17.445658  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:17.449138  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:17.946299  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:17.946320  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:17.946328  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:17.946332  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:17.949583  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:18.446076  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:18.446099  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:18.446109  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:18.446116  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:18.449728  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:18.945959  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:18.945983  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:18.945992  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:18.945996  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:18.949235  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:18.950377  411620 node_ready.go:53] node "ha-445282-m03" has status "Ready":"False"
	I0717 18:24:19.445368  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:19.445393  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:19.445401  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:19.445406  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:19.448628  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:19.945566  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:19.945585  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:19.945594  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:19.945599  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:19.948591  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:20.445963  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:20.445985  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:20.445994  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:20.445998  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:20.449184  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:20.945346  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:20.945378  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:20.945390  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:20.945397  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:20.948588  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:21.445540  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:21.445566  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.445577  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.445582  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.448809  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:21.449350  411620 node_ready.go:49] node "ha-445282-m03" has status "Ready":"True"
	I0717 18:24:21.449369  411620 node_ready.go:38] duration metric: took 16.004266077s for node "ha-445282-m03" to be "Ready" ...
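The loop above polls GET /api/v1/nodes/ha-445282-m03 roughly every 500ms until the Ready condition flips to True, which took about 16s here within the 6m budget. Outside the test harness, the same wait can be expressed with kubectl (a sketch, assuming the profile's kubeconfig context is named after the cluster, as in the other tests in this report):

    # Block until the new control-plane node reports Ready, with the same 6m budget.
    kubectl --context ha-445282 wait --for=condition=Ready node/ha-445282-m03 --timeout=6m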
	I0717 18:24:21.449379  411620 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:24:21.449444  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:24:21.449455  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.449463  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.449466  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.456554  411620 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 18:24:21.463187  411620 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-28njs" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.463285  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-28njs
	I0717 18:24:21.463297  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.463308  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.463317  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.466094  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:21.466745  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:21.466765  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.466773  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.466778  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.469116  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:21.469603  411620 pod_ready.go:92] pod "coredns-7db6d8ff4d-28njs" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:21.469624  411620 pod_ready.go:81] duration metric: took 6.413174ms for pod "coredns-7db6d8ff4d-28njs" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.469633  411620 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rzxbr" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.469679  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rzxbr
	I0717 18:24:21.469686  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.469693  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.469698  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.471997  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:21.472604  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:21.472619  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.472626  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.472630  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.474786  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:21.475349  411620 pod_ready.go:92] pod "coredns-7db6d8ff4d-rzxbr" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:21.475367  411620 pod_ready.go:81] duration metric: took 5.728266ms for pod "coredns-7db6d8ff4d-rzxbr" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.475378  411620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.475439  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/etcd-ha-445282
	I0717 18:24:21.475449  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.475458  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.475468  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.477535  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:21.478072  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:21.478088  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.478097  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.478102  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.480010  411620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 18:24:21.480446  411620 pod_ready.go:92] pod "etcd-ha-445282" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:21.480462  411620 pod_ready.go:81] duration metric: took 5.076646ms for pod "etcd-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.480471  411620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.480563  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/etcd-ha-445282-m02
	I0717 18:24:21.480574  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.480581  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.480585  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.482764  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:21.483334  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:21.483349  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.483356  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.483361  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.485850  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:21.486312  411620 pod_ready.go:92] pod "etcd-ha-445282-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:21.486331  411620 pod_ready.go:81] duration metric: took 5.85437ms for pod "etcd-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.486338  411620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-445282-m03" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.645659  411620 request.go:629] Waited for 159.250572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/etcd-ha-445282-m03
	I0717 18:24:21.645933  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/etcd-ha-445282-m03
	I0717 18:24:21.645939  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.645948  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.645957  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.649393  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:21.846458  411620 request.go:629] Waited for 196.367585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:21.846529  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:21.846542  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:21.846553  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:21.846565  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:21.857374  411620 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0717 18:24:21.859263  411620 pod_ready.go:92] pod "etcd-ha-445282-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:21.859285  411620 pod_ready.go:81] duration metric: took 372.93962ms for pod "etcd-ha-445282-m03" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:21.859313  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:22.046604  411620 request.go:629] Waited for 187.17368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282
	I0717 18:24:22.046678  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282
	I0717 18:24:22.046685  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:22.046698  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:22.046706  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:22.049974  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:22.246176  411620 request.go:629] Waited for 195.358968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:22.246236  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:22.246241  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:22.246251  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:22.246256  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:22.249677  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:22.250186  411620 pod_ready.go:92] pod "kube-apiserver-ha-445282" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:22.250208  411620 pod_ready.go:81] duration metric: took 390.884341ms for pod "kube-apiserver-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:22.250218  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:22.445786  411620 request.go:629] Waited for 195.464948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282-m02
	I0717 18:24:22.445864  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282-m02
	I0717 18:24:22.445874  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:22.445890  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:22.445897  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:22.449286  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:22.646397  411620 request.go:629] Waited for 196.159395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:22.646453  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:22.646457  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:22.646465  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:22.646468  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:22.649637  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:22.650129  411620 pod_ready.go:92] pod "kube-apiserver-ha-445282-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:22.650148  411620 pod_ready.go:81] duration metric: took 399.920158ms for pod "kube-apiserver-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:22.650158  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-445282-m03" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:22.846197  411620 request.go:629] Waited for 195.965297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282-m03
	I0717 18:24:22.846298  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282-m03
	I0717 18:24:22.846305  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:22.846314  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:22.846320  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:22.849544  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:23.046481  411620 request.go:629] Waited for 196.035999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:23.046541  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:23.046545  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:23.046553  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:23.046556  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:23.049743  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:23.050573  411620 pod_ready.go:92] pod "kube-apiserver-ha-445282-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:23.050590  411620 pod_ready.go:81] duration metric: took 400.426327ms for pod "kube-apiserver-ha-445282-m03" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:23.050600  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:23.246181  411620 request.go:629] Waited for 195.488267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282
	I0717 18:24:23.246264  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282
	I0717 18:24:23.246272  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:23.246284  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:23.246294  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:23.250246  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:23.446240  411620 request.go:629] Waited for 195.35445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:23.446332  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:23.446344  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:23.446353  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:23.446362  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:23.449334  411620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 18:24:23.450071  411620 pod_ready.go:92] pod "kube-controller-manager-ha-445282" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:23.450094  411620 pod_ready.go:81] duration metric: took 399.486233ms for pod "kube-controller-manager-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:23.450108  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:23.646580  411620 request.go:629] Waited for 196.393708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282-m02
	I0717 18:24:23.646684  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282-m02
	I0717 18:24:23.646692  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:23.646703  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:23.646715  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:23.650140  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:23.846516  411620 request.go:629] Waited for 195.399684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:23.846600  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:23.846606  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:23.846614  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:23.846618  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:23.850347  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:23.850968  411620 pod_ready.go:92] pod "kube-controller-manager-ha-445282-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:23.850988  411620 pod_ready.go:81] duration metric: took 400.873337ms for pod "kube-controller-manager-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:23.850999  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-445282-m03" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:24.046021  411620 request.go:629] Waited for 194.938571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282-m03
	I0717 18:24:24.046093  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-445282-m03
	I0717 18:24:24.046101  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:24.046110  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:24.046115  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:24.049580  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:24.245624  411620 request.go:629] Waited for 195.287009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:24.245688  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:24.245693  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:24.245700  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:24.245704  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:24.249120  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:24.249804  411620 pod_ready.go:92] pod "kube-controller-manager-ha-445282-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:24.249824  411620 pod_ready.go:81] duration metric: took 398.817754ms for pod "kube-controller-manager-ha-445282-m03" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:24.249838  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vxmp8" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:24.445919  411620 request.go:629] Waited for 195.975163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vxmp8
	I0717 18:24:24.445996  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vxmp8
	I0717 18:24:24.446003  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:24.446011  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:24.446017  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:24.449796  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:24.646104  411620 request.go:629] Waited for 195.35989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:24.646167  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:24.646172  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:24.646180  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:24.646184  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:24.649709  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:24.650323  411620 pod_ready.go:92] pod "kube-proxy-vxmp8" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:24.650344  411620 pod_ready.go:81] duration metric: took 400.498641ms for pod "kube-proxy-vxmp8" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:24.650358  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xs65r" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:24.846293  411620 request.go:629] Waited for 195.837794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xs65r
	I0717 18:24:24.846409  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xs65r
	I0717 18:24:24.846420  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:24.846438  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:24.846448  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:24.849634  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:25.045764  411620 request.go:629] Waited for 195.397847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:25.045823  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:25.045828  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:25.045837  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:25.045841  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:25.049064  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:25.049800  411620 pod_ready.go:92] pod "kube-proxy-xs65r" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:25.049819  411620 pod_ready.go:81] duration metric: took 399.450493ms for pod "kube-proxy-xs65r" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:25.049829  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zb54p" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:25.245791  411620 request.go:629] Waited for 195.887447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zb54p
	I0717 18:24:25.245881  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zb54p
	I0717 18:24:25.245892  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:25.245903  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:25.245910  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:25.249711  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:25.446241  411620 request.go:629] Waited for 195.755955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:25.446304  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:25.446309  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:25.446317  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:25.446324  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:25.449659  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:25.450286  411620 pod_ready.go:92] pod "kube-proxy-zb54p" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:25.450305  411620 pod_ready.go:81] duration metric: took 400.470675ms for pod "kube-proxy-zb54p" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:25.450314  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:25.646534  411620 request.go:629] Waited for 196.092786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282
	I0717 18:24:25.646632  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282
	I0717 18:24:25.646644  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:25.646655  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:25.646665  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:25.650065  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:25.846125  411620 request.go:629] Waited for 195.372135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:25.846194  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282
	I0717 18:24:25.846204  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:25.846218  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:25.846228  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:25.849639  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:25.850231  411620 pod_ready.go:92] pod "kube-scheduler-ha-445282" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:25.850249  411620 pod_ready.go:81] duration metric: took 399.928986ms for pod "kube-scheduler-ha-445282" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:25.850260  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:26.046335  411620 request.go:629] Waited for 195.99919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282-m02
	I0717 18:24:26.046402  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282-m02
	I0717 18:24:26.046408  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:26.046416  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:26.046421  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:26.049721  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:26.246000  411620 request.go:629] Waited for 195.358558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:26.246081  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m02
	I0717 18:24:26.246088  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:26.246096  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:26.246102  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:26.249505  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:26.250004  411620 pod_ready.go:92] pod "kube-scheduler-ha-445282-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:26.250026  411620 pod_ready.go:81] duration metric: took 399.755503ms for pod "kube-scheduler-ha-445282-m02" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:26.250040  411620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-445282-m03" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:26.446110  411620 request.go:629] Waited for 195.960662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282-m03
	I0717 18:24:26.446191  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-445282-m03
	I0717 18:24:26.446200  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:26.446212  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:26.446222  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:26.449554  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:26.645601  411620 request.go:629] Waited for 195.230272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:26.645666  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes/ha-445282-m03
	I0717 18:24:26.645672  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:26.645682  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:26.645687  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:26.648754  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:26.649370  411620 pod_ready.go:92] pod "kube-scheduler-ha-445282-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 18:24:26.649388  411620 pod_ready.go:81] duration metric: took 399.340756ms for pod "kube-scheduler-ha-445282-m03" in "kube-system" namespace to be "Ready" ...
	I0717 18:24:26.649401  411620 pod_ready.go:38] duration metric: took 5.200011858s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:24:26.649417  411620 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:24:26.649473  411620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:24:26.665361  411620 api_server.go:72] duration metric: took 21.469138503s to wait for apiserver process to appear ...
	I0717 18:24:26.665384  411620 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:24:26.665403  411620 api_server.go:253] Checking apiserver healthz at https://192.168.39.147:8443/healthz ...
	I0717 18:24:26.669685  411620 api_server.go:279] https://192.168.39.147:8443/healthz returned 200:
	ok
	I0717 18:24:26.669747  411620 round_trippers.go:463] GET https://192.168.39.147:8443/version
	I0717 18:24:26.669755  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:26.669763  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:26.669769  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:26.670788  411620 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 18:24:26.670863  411620 api_server.go:141] control plane version: v1.30.2
	I0717 18:24:26.670884  411620 api_server.go:131] duration metric: took 5.48806ms to wait for apiserver health ...
	I0717 18:24:26.670898  411620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:24:26.846316  411620 request.go:629] Waited for 175.328812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:24:26.846382  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:24:26.846387  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:26.846395  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:26.846402  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:26.853526  411620 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 18:24:26.860951  411620 system_pods.go:59] 24 kube-system pods found
	I0717 18:24:26.860989  411620 system_pods.go:61] "coredns-7db6d8ff4d-28njs" [1e8f2f11-c89c-42ae-829a-e2cf1dea11b6] Running
	I0717 18:24:26.860995  411620 system_pods.go:61] "coredns-7db6d8ff4d-rzxbr" [9630d87d-3470-4675-9b3c-a10ff614f5e1] Running
	I0717 18:24:26.861000  411620 system_pods.go:61] "etcd-ha-445282" [0575d3f5-82a8-4bfd-9386-00d014e19119] Running
	I0717 18:24:26.861005  411620 system_pods.go:61] "etcd-ha-445282-m02" [eb066c71-5455-4bd5-b5c0-f7858661506b] Running
	I0717 18:24:26.861010  411620 system_pods.go:61] "etcd-ha-445282-m03" [9621969a-6d14-4d47-92b2-c5dc4a2ca531] Running
	I0717 18:24:26.861014  411620 system_pods.go:61] "kindnet-75gcw" [872c1132-e584-47c1-a873-74615d52511b] Running
	I0717 18:24:26.861020  411620 system_pods.go:61] "kindnet-mdqdz" [fdb368a3-7d1c-4073-a351-85d6c92a27af] Running
	I0717 18:24:26.861027  411620 system_pods.go:61] "kindnet-x62t5" [1045c2e4-d4c7-43be-8050-caed7eecc2a7] Running
	I0717 18:24:26.861036  411620 system_pods.go:61] "kube-apiserver-ha-445282" [d7814ca7-0944-4cac-8438-53640be6f85c] Running
	I0717 18:24:26.861042  411620 system_pods.go:61] "kube-apiserver-ha-445282-m02" [1014746f-377d-455f-b86b-66e4ee3aaddf] Running
	I0717 18:24:26.861048  411620 system_pods.go:61] "kube-apiserver-ha-445282-m03" [40ca072c-1516-4ba2-9224-35b7457e06eb] Running
	I0717 18:24:26.861054  411620 system_pods.go:61] "kube-controller-manager-ha-445282" [4b62f365-b4c2-46fd-9ca6-6c18f0205159] Running
	I0717 18:24:26.861060  411620 system_pods.go:61] "kube-controller-manager-ha-445282-m02" [f7ef8ac1-6f28-49f2-95a3-9224907eaf2b] Running
	I0717 18:24:26.861066  411620 system_pods.go:61] "kube-controller-manager-ha-445282-m03" [438e8ce2-42b4-4ba1-8982-cc91043c6025] Running
	I0717 18:24:26.861074  411620 system_pods.go:61] "kube-proxy-vxmp8" [cca555da-b93a-430c-8fbe-7e732af65a3a] Running
	I0717 18:24:26.861079  411620 system_pods.go:61] "kube-proxy-xs65r" [f0a65765-1826-47e6-ab8d-78ae6bb3abca] Running
	I0717 18:24:26.861087  411620 system_pods.go:61] "kube-proxy-zb54p" [4f525f13-19ee-4a9a-a898-3fc33539d368] Running
	I0717 18:24:26.861092  411620 system_pods.go:61] "kube-scheduler-ha-445282" [ec2ecb84-3559-430f-815c-a2d2ccbb197b] Running
	I0717 18:24:26.861098  411620 system_pods.go:61] "kube-scheduler-ha-445282-m02" [71380e3c-2e00-4bd3-adf8-06af51f3bb49] Running
	I0717 18:24:26.861104  411620 system_pods.go:61] "kube-scheduler-ha-445282-m03" [efca200e-c509-4fe1-aae4-35805a8a1b79] Running
	I0717 18:24:26.861109  411620 system_pods.go:61] "kube-vip-ha-445282" [ca5bcedd-e43a-4711-bdfc-dc1c2c524d86] Running
	I0717 18:24:26.861114  411620 system_pods.go:61] "kube-vip-ha-445282-m02" [53798037-a734-43b8-be52-834446680e9a] Running
	I0717 18:24:26.861121  411620 system_pods.go:61] "kube-vip-ha-445282-m03" [11e685c6-4c65-4e8d-9d63-929d7efb2140] Running
	I0717 18:24:26.861125  411620 system_pods.go:61] "storage-provisioner" [ae931c3b-8935-481d-bef4-0b05dad8c915] Running
	I0717 18:24:26.861134  411620 system_pods.go:74] duration metric: took 190.225321ms to wait for pod list to return data ...
	I0717 18:24:26.861149  411620 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:24:27.046590  411620 request.go:629] Waited for 185.348094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/default/serviceaccounts
	I0717 18:24:27.046673  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/default/serviceaccounts
	I0717 18:24:27.046682  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:27.046692  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:27.046704  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:27.050119  411620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 18:24:27.050236  411620 default_sa.go:45] found service account: "default"
	I0717 18:24:27.050250  411620 default_sa.go:55] duration metric: took 189.094114ms for default service account to be created ...
	I0717 18:24:27.050258  411620 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:24:27.245634  411620 request.go:629] Waited for 195.301482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:24:27.245718  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/namespaces/kube-system/pods
	I0717 18:24:27.245724  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:27.245730  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:27.245736  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:27.252192  411620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 18:24:27.258652  411620 system_pods.go:86] 24 kube-system pods found
	I0717 18:24:27.258681  411620 system_pods.go:89] "coredns-7db6d8ff4d-28njs" [1e8f2f11-c89c-42ae-829a-e2cf1dea11b6] Running
	I0717 18:24:27.258687  411620 system_pods.go:89] "coredns-7db6d8ff4d-rzxbr" [9630d87d-3470-4675-9b3c-a10ff614f5e1] Running
	I0717 18:24:27.258691  411620 system_pods.go:89] "etcd-ha-445282" [0575d3f5-82a8-4bfd-9386-00d014e19119] Running
	I0717 18:24:27.258695  411620 system_pods.go:89] "etcd-ha-445282-m02" [eb066c71-5455-4bd5-b5c0-f7858661506b] Running
	I0717 18:24:27.258700  411620 system_pods.go:89] "etcd-ha-445282-m03" [9621969a-6d14-4d47-92b2-c5dc4a2ca531] Running
	I0717 18:24:27.258705  411620 system_pods.go:89] "kindnet-75gcw" [872c1132-e584-47c1-a873-74615d52511b] Running
	I0717 18:24:27.258711  411620 system_pods.go:89] "kindnet-mdqdz" [fdb368a3-7d1c-4073-a351-85d6c92a27af] Running
	I0717 18:24:27.258717  411620 system_pods.go:89] "kindnet-x62t5" [1045c2e4-d4c7-43be-8050-caed7eecc2a7] Running
	I0717 18:24:27.258722  411620 system_pods.go:89] "kube-apiserver-ha-445282" [d7814ca7-0944-4cac-8438-53640be6f85c] Running
	I0717 18:24:27.258730  411620 system_pods.go:89] "kube-apiserver-ha-445282-m02" [1014746f-377d-455f-b86b-66e4ee3aaddf] Running
	I0717 18:24:27.258737  411620 system_pods.go:89] "kube-apiserver-ha-445282-m03" [40ca072c-1516-4ba2-9224-35b7457e06eb] Running
	I0717 18:24:27.258745  411620 system_pods.go:89] "kube-controller-manager-ha-445282" [4b62f365-b4c2-46fd-9ca6-6c18f0205159] Running
	I0717 18:24:27.258756  411620 system_pods.go:89] "kube-controller-manager-ha-445282-m02" [f7ef8ac1-6f28-49f2-95a3-9224907eaf2b] Running
	I0717 18:24:27.258762  411620 system_pods.go:89] "kube-controller-manager-ha-445282-m03" [438e8ce2-42b4-4ba1-8982-cc91043c6025] Running
	I0717 18:24:27.258768  411620 system_pods.go:89] "kube-proxy-vxmp8" [cca555da-b93a-430c-8fbe-7e732af65a3a] Running
	I0717 18:24:27.258772  411620 system_pods.go:89] "kube-proxy-xs65r" [f0a65765-1826-47e6-ab8d-78ae6bb3abca] Running
	I0717 18:24:27.258777  411620 system_pods.go:89] "kube-proxy-zb54p" [4f525f13-19ee-4a9a-a898-3fc33539d368] Running
	I0717 18:24:27.258781  411620 system_pods.go:89] "kube-scheduler-ha-445282" [ec2ecb84-3559-430f-815c-a2d2ccbb197b] Running
	I0717 18:24:27.258786  411620 system_pods.go:89] "kube-scheduler-ha-445282-m02" [71380e3c-2e00-4bd3-adf8-06af51f3bb49] Running
	I0717 18:24:27.258789  411620 system_pods.go:89] "kube-scheduler-ha-445282-m03" [efca200e-c509-4fe1-aae4-35805a8a1b79] Running
	I0717 18:24:27.258794  411620 system_pods.go:89] "kube-vip-ha-445282" [ca5bcedd-e43a-4711-bdfc-dc1c2c524d86] Running
	I0717 18:24:27.258798  411620 system_pods.go:89] "kube-vip-ha-445282-m02" [53798037-a734-43b8-be52-834446680e9a] Running
	I0717 18:24:27.258802  411620 system_pods.go:89] "kube-vip-ha-445282-m03" [11e685c6-4c65-4e8d-9d63-929d7efb2140] Running
	I0717 18:24:27.258806  411620 system_pods.go:89] "storage-provisioner" [ae931c3b-8935-481d-bef4-0b05dad8c915] Running
	I0717 18:24:27.258812  411620 system_pods.go:126] duration metric: took 208.548733ms to wait for k8s-apps to be running ...
	I0717 18:24:27.258823  411620 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:24:27.258884  411620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:24:27.274922  411620 system_svc.go:56] duration metric: took 16.088371ms WaitForService to wait for kubelet
	I0717 18:24:27.274955  411620 kubeadm.go:582] duration metric: took 22.078733901s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:24:27.274984  411620 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:24:27.446213  411620 request.go:629] Waited for 171.128406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.147:8443/api/v1/nodes
	I0717 18:24:27.446399  411620 round_trippers.go:463] GET https://192.168.39.147:8443/api/v1/nodes
	I0717 18:24:27.446424  411620 round_trippers.go:469] Request Headers:
	I0717 18:24:27.446436  411620 round_trippers.go:473]     Accept: application/json, */*
	I0717 18:24:27.446441  411620 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 18:24:27.450859  411620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 18:24:27.452709  411620 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:24:27.452729  411620 node_conditions.go:123] node cpu capacity is 2
	I0717 18:24:27.452741  411620 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:24:27.452745  411620 node_conditions.go:123] node cpu capacity is 2
	I0717 18:24:27.452748  411620 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:24:27.452751  411620 node_conditions.go:123] node cpu capacity is 2
	I0717 18:24:27.452755  411620 node_conditions.go:105] duration metric: took 177.766473ms to run NodePressure ...
	I0717 18:24:27.452766  411620 start.go:241] waiting for startup goroutines ...
	I0717 18:24:27.452796  411620 start.go:255] writing updated cluster config ...
	I0717 18:24:27.453063  411620 ssh_runner.go:195] Run: rm -f paused
	I0717 18:24:27.505257  411620 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:24:27.507135  411620 out.go:177] * Done! kubectl is now configured to use "ha-445282" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.475653657Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=455cccac-6a8d-4f89-9adc-1472d6b53ca0 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.476864636Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54bb76b4-3fdb-4c2c-bb83-697ab406ad3d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.477604802Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721240951477578612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54bb76b4-3fdb-4c2c-bb83-697ab406ad3d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.478225502Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1d68b77-87fc-42ec-8925-e73f7042b831 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.478285990Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1d68b77-87fc-42ec-8925-e73f7042b831 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.478951582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46bb59b8c88a5f72356d7eab6e299cb49357832b2f32f9da4d688f440d7708de,PodSandboxId:c6775eb0d598035f8cd74b757ae38e81e954dc7f515089267a841fa0e9cb45be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721240671679698693,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotations:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ce94edc90340e3fecdf7e9c373bf97b043857f76676c04f062a075824d8435,PodSandboxId:5dcf3fb8a7f3f5d54ff6c76abb70ec4580f6cebcf52b0c827811568135666097,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721240530760249768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570,PodSandboxId:7904758cf99a7ab28546eb8985ee7b046204d30d1edf39094c972ed389e5fbd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240530705259652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8405ae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1,PodSandboxId:1b4104fef2abaea24a96f4b40a7ae8dfd47c5d0b44c0b88ab5fd54254951ddff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240530698723869,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c8
9c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22,PodSandboxId:ea48366339cf7e3949139c7e70a94f474f735581280c6ec1323d8b6403124191,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CON
TAINER_RUNNING,CreatedAt:1721240518897882747,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0,PodSandboxId:9798b06dd09f98ca5f7cd1bfbfde8d398337d482475c16fb27417fc47dc574b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721240514
654026257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac29ebebce0938fd21e40b0afaed55120b3a90091496f7e0bb354f366e3983d1,PodSandboxId:180a789b714bd39d990f20ae64f2877f639a08c6c0a2ebed663b786b4155f211,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172124049640
4937495,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dd8913571a8d10ff9e0c918f975230e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943,PodSandboxId:d2f7bf6b169d4d9ca65b56d285cee83b77ebe598e1560374d9f2397db27fe0fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721240493481006900,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annotations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608260c5da2653858a3ba5ed68d5d0fd133359fe2d82577c89dd208d1fd4061a,PodSandboxId:e46a9bac3bd93e20e4e77a2402e91cab0878f1ee6658c9be0c3f8be2e17f1d93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721240493465205078,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f910525936daaedaf4fb3cce81ed7e6f3f6fb3c9cf2aa2ba7e26987a717c5b8b,PodSandboxId:c34972633700db086b85419fb496ea24fc7b4fd5034b94f01d97e96af0978505,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721240493440874611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65,PodSandboxId:b5b8e1d746c8d2a45352b8a3ad8ed98ccc12e52438cfffc99ed7b3e0d101f57b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721240493390934896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1d68b77-87fc-42ec-8925-e73f7042b831 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.524815693Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2bd32481-358d-4afd-94c3-101e556251f2 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.524929844Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2bd32481-358d-4afd-94c3-101e556251f2 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.526849126Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=204a07b9-a3d5-42d0-b2ba-2ea56007ae91 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.527509487Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721240951527484734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=204a07b9-a3d5-42d0-b2ba-2ea56007ae91 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.528285039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6080cfaa-c222-46b9-af45-300b007b4289 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.528340569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6080cfaa-c222-46b9-af45-300b007b4289 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.528646238Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46bb59b8c88a5f72356d7eab6e299cb49357832b2f32f9da4d688f440d7708de,PodSandboxId:c6775eb0d598035f8cd74b757ae38e81e954dc7f515089267a841fa0e9cb45be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721240671679698693,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotations:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ce94edc90340e3fecdf7e9c373bf97b043857f76676c04f062a075824d8435,PodSandboxId:5dcf3fb8a7f3f5d54ff6c76abb70ec4580f6cebcf52b0c827811568135666097,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721240530760249768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570,PodSandboxId:7904758cf99a7ab28546eb8985ee7b046204d30d1edf39094c972ed389e5fbd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240530705259652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8405ae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1,PodSandboxId:1b4104fef2abaea24a96f4b40a7ae8dfd47c5d0b44c0b88ab5fd54254951ddff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240530698723869,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c8
9c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22,PodSandboxId:ea48366339cf7e3949139c7e70a94f474f735581280c6ec1323d8b6403124191,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CON
TAINER_RUNNING,CreatedAt:1721240518897882747,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0,PodSandboxId:9798b06dd09f98ca5f7cd1bfbfde8d398337d482475c16fb27417fc47dc574b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721240514
654026257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac29ebebce0938fd21e40b0afaed55120b3a90091496f7e0bb354f366e3983d1,PodSandboxId:180a789b714bd39d990f20ae64f2877f639a08c6c0a2ebed663b786b4155f211,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172124049640
4937495,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dd8913571a8d10ff9e0c918f975230e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943,PodSandboxId:d2f7bf6b169d4d9ca65b56d285cee83b77ebe598e1560374d9f2397db27fe0fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721240493481006900,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annotations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608260c5da2653858a3ba5ed68d5d0fd133359fe2d82577c89dd208d1fd4061a,PodSandboxId:e46a9bac3bd93e20e4e77a2402e91cab0878f1ee6658c9be0c3f8be2e17f1d93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721240493465205078,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f910525936daaedaf4fb3cce81ed7e6f3f6fb3c9cf2aa2ba7e26987a717c5b8b,PodSandboxId:c34972633700db086b85419fb496ea24fc7b4fd5034b94f01d97e96af0978505,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721240493440874611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65,PodSandboxId:b5b8e1d746c8d2a45352b8a3ad8ed98ccc12e52438cfffc99ed7b3e0d101f57b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721240493390934896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6080cfaa-c222-46b9-af45-300b007b4289 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.566319588Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9217b035-629f-47e3-82f3-765fb8b28361 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.566400849Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9217b035-629f-47e3-82f3-765fb8b28361 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.569102161Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd765c29-6b05-4faf-9737-73d840b8eac9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.569702307Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c6775eb0d598035f8cd74b757ae38e81e954dc7f515089267a841fa0e9cb45be,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-mcsw8,Uid:727368ca-3135-44f6-93b1-5cfb12476236,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721240668737532527,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:24:28.412126588Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5dcf3fb8a7f3f5d54ff6c76abb70ec4580f6cebcf52b0c827811568135666097,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ae931c3b-8935-481d-bef4-0b05dad8c915,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1721240530474964736,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-17T18:22:10.152493513Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1b4104fef2abaea24a96f4b40a7ae8dfd47c5d0b44c0b88ab5fd54254951ddff,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-28njs,Uid:1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721240530457299416,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:22:10.144301013Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7904758cf99a7ab28546eb8985ee7b046204d30d1edf39094c972ed389e5fbd4,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rzxbr,Uid:9630d87d-3470-4675-9b3c-a10ff614f5e1,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1721240530456333308,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:22:10.140213967Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ea48366339cf7e3949139c7e70a94f474f735581280c6ec1323d8b6403124191,Metadata:&PodSandboxMetadata{Name:kindnet-75gcw,Uid:872c1132-e584-47c1-a873-74615d52511b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721240514546899080,Labels:map[string]string{app: kindnet,controller-revision-hash: 545f566499,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:21:52.727502293Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9798b06dd09f98ca5f7cd1bfbfde8d398337d482475c16fb27417fc47dc574b4,Metadata:&PodSandboxMetadata{Name:kube-proxy-vxmp8,Uid:cca555da-b93a-430c-8fbe-7e732af65a3a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721240514538469874,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:21:52.727395024Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:180a789b714bd39d990f20ae64f2877f639a08c6c0a2ebed663b786b4155f211,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-445282,Uid:7dd8913571a8d10ff9e0c918f975230e,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1721240493204890447,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dd8913571a8d10ff9e0c918f975230e,},Annotations:map[string]string{kubernetes.io/config.hash: 7dd8913571a8d10ff9e0c918f975230e,kubernetes.io/config.seen: 2024-07-17T18:21:32.718381791Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c34972633700db086b85419fb496ea24fc7b4fd5034b94f01d97e96af0978505,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-445282,Uid:b71086ebffd4e15bc7c5f6152b697200,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721240493200699421,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,tier: control-plane,},Annotations:map[string]string{kube
rnetes.io/config.hash: b71086ebffd4e15bc7c5f6152b697200,kubernetes.io/config.seen: 2024-07-17T18:21:32.718380032Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e46a9bac3bd93e20e4e77a2402e91cab0878f1ee6658c9be0c3f8be2e17f1d93,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-445282,Uid:058431b563c109d1ce3751345314cdc4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721240493192782752,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.147:8443,kubernetes.io/config.hash: 058431b563c109d1ce3751345314cdc4,kubernetes.io/config.seen: 2024-07-17T18:21:32.718378927Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d2f7bf6b169d4d9ca65b56d285cee83b77ebe598e
1560374d9f2397db27fe0fb,Metadata:&PodSandboxMetadata{Name:etcd-ha-445282,Uid:5611ca3ae268bab43701867e47a0324e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721240493181108484,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.147:2379,kubernetes.io/config.hash: 5611ca3ae268bab43701867e47a0324e,kubernetes.io/config.seen: 2024-07-17T18:21:32.718375330Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b5b8e1d746c8d2a45352b8a3ad8ed98ccc12e52438cfffc99ed7b3e0d101f57b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-445282,Uid:8d0e44b0150b917f8f54d6a478ddc641,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721240493180322671,Labels:map[string]string{component: kube-scheduler,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8d0e44b0150b917f8f54d6a478ddc641,kubernetes.io/config.seen: 2024-07-17T18:21:32.718381017Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=dd765c29-6b05-4faf-9737-73d840b8eac9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.570302056Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0bb0d359-27bc-476c-b5c1-5519e668b00c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.570372392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0bb0d359-27bc-476c-b5c1-5519e668b00c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.570397054Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06179344-9a08-424c-a0b0-48cbe584afee name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.570682113Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46bb59b8c88a5f72356d7eab6e299cb49357832b2f32f9da4d688f440d7708de,PodSandboxId:c6775eb0d598035f8cd74b757ae38e81e954dc7f515089267a841fa0e9cb45be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721240671679698693,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotations:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ce94edc90340e3fecdf7e9c373bf97b043857f76676c04f062a075824d8435,PodSandboxId:5dcf3fb8a7f3f5d54ff6c76abb70ec4580f6cebcf52b0c827811568135666097,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721240530760249768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570,PodSandboxId:7904758cf99a7ab28546eb8985ee7b046204d30d1edf39094c972ed389e5fbd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240530705259652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8405ae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1,PodSandboxId:1b4104fef2abaea24a96f4b40a7ae8dfd47c5d0b44c0b88ab5fd54254951ddff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240530698723869,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c8
9c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22,PodSandboxId:ea48366339cf7e3949139c7e70a94f474f735581280c6ec1323d8b6403124191,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CON
TAINER_RUNNING,CreatedAt:1721240518897882747,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0,PodSandboxId:9798b06dd09f98ca5f7cd1bfbfde8d398337d482475c16fb27417fc47dc574b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721240514
654026257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac29ebebce0938fd21e40b0afaed55120b3a90091496f7e0bb354f366e3983d1,PodSandboxId:180a789b714bd39d990f20ae64f2877f639a08c6c0a2ebed663b786b4155f211,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172124049640
4937495,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dd8913571a8d10ff9e0c918f975230e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943,PodSandboxId:d2f7bf6b169d4d9ca65b56d285cee83b77ebe598e1560374d9f2397db27fe0fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721240493481006900,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annotations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608260c5da2653858a3ba5ed68d5d0fd133359fe2d82577c89dd208d1fd4061a,PodSandboxId:e46a9bac3bd93e20e4e77a2402e91cab0878f1ee6658c9be0c3f8be2e17f1d93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721240493465205078,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f910525936daaedaf4fb3cce81ed7e6f3f6fb3c9cf2aa2ba7e26987a717c5b8b,PodSandboxId:c34972633700db086b85419fb496ea24fc7b4fd5034b94f01d97e96af0978505,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721240493440874611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65,PodSandboxId:b5b8e1d746c8d2a45352b8a3ad8ed98ccc12e52438cfffc99ed7b3e0d101f57b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721240493390934896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0bb0d359-27bc-476c-b5c1-5519e668b00c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.571121613Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721240951571102530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06179344-9a08-424c-a0b0-48cbe584afee name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.572332637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7923e7c9-3410-43b7-a755-b0a764ef7856 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.572379218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7923e7c9-3410-43b7-a755-b0a764ef7856 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:29:11 ha-445282 crio[683]: time="2024-07-17 18:29:11.572776674Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46bb59b8c88a5f72356d7eab6e299cb49357832b2f32f9da4d688f440d7708de,PodSandboxId:c6775eb0d598035f8cd74b757ae38e81e954dc7f515089267a841fa0e9cb45be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721240671679698693,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotations:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54ce94edc90340e3fecdf7e9c373bf97b043857f76676c04f062a075824d8435,PodSandboxId:5dcf3fb8a7f3f5d54ff6c76abb70ec4580f6cebcf52b0c827811568135666097,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721240530760249768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570,PodSandboxId:7904758cf99a7ab28546eb8985ee7b046204d30d1edf39094c972ed389e5fbd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240530705259652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8405ae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1,PodSandboxId:1b4104fef2abaea24a96f4b40a7ae8dfd47c5d0b44c0b88ab5fd54254951ddff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240530698723869,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c8
9c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22,PodSandboxId:ea48366339cf7e3949139c7e70a94f474f735581280c6ec1323d8b6403124191,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CON
TAINER_RUNNING,CreatedAt:1721240518897882747,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0,PodSandboxId:9798b06dd09f98ca5f7cd1bfbfde8d398337d482475c16fb27417fc47dc574b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721240514
654026257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac29ebebce0938fd21e40b0afaed55120b3a90091496f7e0bb354f366e3983d1,PodSandboxId:180a789b714bd39d990f20ae64f2877f639a08c6c0a2ebed663b786b4155f211,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172124049640
4937495,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dd8913571a8d10ff9e0c918f975230e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943,PodSandboxId:d2f7bf6b169d4d9ca65b56d285cee83b77ebe598e1560374d9f2397db27fe0fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721240493481006900,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annotations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608260c5da2653858a3ba5ed68d5d0fd133359fe2d82577c89dd208d1fd4061a,PodSandboxId:e46a9bac3bd93e20e4e77a2402e91cab0878f1ee6658c9be0c3f8be2e17f1d93,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721240493465205078,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f910525936daaedaf4fb3cce81ed7e6f3f6fb3c9cf2aa2ba7e26987a717c5b8b,PodSandboxId:c34972633700db086b85419fb496ea24fc7b4fd5034b94f01d97e96af0978505,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721240493440874611,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65,PodSandboxId:b5b8e1d746c8d2a45352b8a3ad8ed98ccc12e52438cfffc99ed7b3e0d101f57b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721240493390934896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7923e7c9-3410-43b7-a755-b0a764ef7856 name=/runtime.v1.RuntimeService/ListContainers
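The journal excerpt above is dominated by the kubelet's periodic CRI polling of the crio socket (Version, ImageFsInfo, ListPodSandbox, ListContainers). As a rough, hypothetical illustration (not part of the minikube test suite), the two RPCs that recur in the log can be issued directly with the generated CRI v1 gRPC client; the socket path is taken from the kubeadm cri-socket annotation shown under "describe nodes" below, everything else here is an assumption:

```go
// Minimal sketch: talk to cri-o over its unix socket and issue the same
// Version and ListContainers(CONTAINER_RUNNING) calls seen in the log above.
// Assumes it runs on the node itself (e.g. via `minikube ssh`) with access
// to /var/run/crio/crio.sock.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// gRPC understands the unix:// scheme, so no custom dialer is needed.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Mirrors the /runtime.v1.RuntimeService/Version entries in the journal.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// Mirrors ListContainers with the CONTAINER_RUNNING filter used by the kubelet.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			State: &runtimeapi.ContainerStateValue{State: runtimeapi.ContainerState_CONTAINER_RUNNING},
		},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Print the short container id, as in the "container status" table below.
		fmt.Println(c.Id[:13], c.Metadata.Name)
	}
}
```

Against this node such a client should list the same eleven running containers as the table that follows.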
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	46bb59b8c88a5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   c6775eb0d5980       busybox-fc5497c4f-mcsw8
	54ce94edc9034       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   5dcf3fb8a7f3f       storage-provisioner
	408ccf9c4f5cb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   7904758cf99a7       coredns-7db6d8ff4d-rzxbr
	9c8f03436294a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   1b4104fef2aba       coredns-7db6d8ff4d-28njs
	6e8619164a43b       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    7 minutes ago       Running             kindnet-cni               0                   ea48366339cf7       kindnet-75gcw
	ab95f55f84d8d       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      7 minutes ago       Running             kube-proxy                0                   9798b06dd09f9       kube-proxy-vxmp8
	ac29ebebce093       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   180a789b714bd       kube-vip-ha-445282
	09fdf7de5bf8c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   d2f7bf6b169d4       etcd-ha-445282
	608260c5da265       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      7 minutes ago       Running             kube-apiserver            0                   e46a9bac3bd93       kube-apiserver-ha-445282
	f910525936daa       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      7 minutes ago       Running             kube-controller-manager   0                   c34972633700d       kube-controller-manager-ha-445282
	585303a41caea       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      7 minutes ago       Running             kube-scheduler            0                   b5b8e1d746c8d       kube-scheduler-ha-445282
	
	
	==> coredns [408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570] <==
	[INFO] 10.244.1.2:57634 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003274634s
	[INFO] 10.244.1.2:60887 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000345633s
	[INFO] 10.244.1.2:46939 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000198474s
	[INFO] 10.244.1.2:42067 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000193888s
	[INFO] 10.244.1.2:38612 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103227s
	[INFO] 10.244.0.4:44523 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001703135s
	[INFO] 10.244.0.4:59477 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107361s
	[INFO] 10.244.0.4:56198 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108839s
	[INFO] 10.244.0.4:38398 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00004501s
	[INFO] 10.244.0.4:41070 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061061s
	[INFO] 10.244.2.2:37193 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00186169s
	[INFO] 10.244.2.2:47175 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001259008s
	[INFO] 10.244.2.2:43118 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117844s
	[INFO] 10.244.2.2:43940 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104875s
	[INFO] 10.244.1.2:43839 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163961s
	[INFO] 10.244.1.2:57262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014754s
	[INFO] 10.244.1.2:59861 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089161s
	[INFO] 10.244.0.4:35507 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101753s
	[INFO] 10.244.0.4:50990 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048865s
	[INFO] 10.244.2.2:35692 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101106s
	[INFO] 10.244.2.2:47438 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106571s
	[INFO] 10.244.0.4:37290 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140704s
	[INFO] 10.244.0.4:37755 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145358s
	[INFO] 10.244.2.2:58729 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097845s
	[INFO] 10.244.2.2:47405 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008526s
	
	
	==> coredns [9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1] <==
	[INFO] 10.244.1.2:35140 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013649006s
	[INFO] 10.244.0.4:49386 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00164129s
	[INFO] 10.244.1.2:55522 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193989s
	[INFO] 10.244.1.2:35380 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000257865s
	[INFO] 10.244.1.2:59627 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.080250702s
	[INFO] 10.244.0.4:51929 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136107s
	[INFO] 10.244.0.4:36818 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096811s
	[INFO] 10.244.0.4:42583 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001301585s
	[INFO] 10.244.2.2:59932 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203977s
	[INFO] 10.244.2.2:50906 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000207365s
	[INFO] 10.244.2.2:41438 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000168363s
	[INFO] 10.244.2.2:47479 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170645s
	[INFO] 10.244.1.2:54595 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000208251s
	[INFO] 10.244.0.4:34251 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081496s
	[INFO] 10.244.0.4:35201 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063768s
	[INFO] 10.244.2.2:50926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154679s
	[INFO] 10.244.2.2:39243 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122767s
	[INFO] 10.244.1.2:50770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014514s
	[INFO] 10.244.1.2:37706 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166071s
	[INFO] 10.244.1.2:53197 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000306441s
	[INFO] 10.244.1.2:34142 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128366s
	[INFO] 10.244.0.4:60617 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102661s
	[INFO] 10.244.0.4:54474 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060033s
	[INFO] 10.244.2.2:50977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014662s
	[INFO] 10.244.2.2:58773 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013261s
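Both CoreDNS instances log the same resolution pattern: a lookup for the bare name kubernetes.default is expanded through the pod search path (hence the NXDOMAIN answers for kubernetes.default.default.svc.cluster.local and for the forwarded bare name) before being answered as kubernetes.default.svc.cluster.local, and the PTR queries for 1.0.96.10.in-addr.arpa reverse-resolve the kubernetes service VIP (presumably 10.96.0.1, given those PTR answers). A minimal sketch of the kind of in-pod lookup that generates these entries, assuming the default cluster-first DNS policy (ndots:5 plus the default.svc / svc / cluster.local search domains):

```go
// Hypothetical in-cluster lookup; run from inside a pod it goes through
// CoreDNS and produces query lines like the ones logged above.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	// "kubernetes.default" has fewer dots than ndots (5), so the resolver walks
	// the search path first, which explains the NXDOMAIN entries in the log.
	addrs, err := net.DefaultResolver.LookupHost(ctx, "kubernetes.default")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs)

	// The reverse lookup is what shows up as "PTR IN 1.0.96.10.in-addr.arpa".
	names, err := net.DefaultResolver.LookupAddr(ctx, "10.96.0.1")
	if err != nil {
		panic(err)
	}
	fmt.Println(names)
}
```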
	
	
	==> describe nodes <==
	Name:               ha-445282
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-445282
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=ha-445282
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T18_21_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:21:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-445282
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:29:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:24:43 +0000   Wed, 17 Jul 2024 18:21:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:24:43 +0000   Wed, 17 Jul 2024 18:21:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:24:43 +0000   Wed, 17 Jul 2024 18:21:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:24:43 +0000   Wed, 17 Jul 2024 18:22:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    ha-445282
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d1ea799c4fd84c5c8c95385b6a2349f7
	  System UUID:                d1ea799c-4fd8-4c5c-8c95-385b6a2349f7
	  Boot ID:                    58e8f531-06d1-4b66-9fa8-93cd9d417ce6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mcsw8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 coredns-7db6d8ff4d-28njs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m19s
	  kube-system                 coredns-7db6d8ff4d-rzxbr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m19s
	  kube-system                 etcd-ha-445282                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m32s
	  kube-system                 kindnet-75gcw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m19s
	  kube-system                 kube-apiserver-ha-445282             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 kube-controller-manager-ha-445282    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-proxy-vxmp8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-scheduler-ha-445282             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 kube-vip-ha-445282                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m16s  kube-proxy       
	  Normal  Starting                 7m32s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m32s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m32s  kubelet          Node ha-445282 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m32s  kubelet          Node ha-445282 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m32s  kubelet          Node ha-445282 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m20s  node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	  Normal  NodeReady                7m1s   kubelet          Node ha-445282 status is now: NodeReady
	  Normal  RegisteredNode           6m11s  node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	  Normal  RegisteredNode           4m53s  node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	
	
	Name:               ha-445282-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-445282-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=ha-445282
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_22_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:22:42 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-445282-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:25:37 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 18:24:45 +0000   Wed, 17 Jul 2024 18:26:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 18:24:45 +0000   Wed, 17 Jul 2024 18:26:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 18:24:45 +0000   Wed, 17 Jul 2024 18:26:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 18:24:45 +0000   Wed, 17 Jul 2024 18:26:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-445282-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5dee104babdb45fe968765f68a06ccd6
	  System UUID:                5dee104b-abdb-45fe-9687-65f68a06ccd6
	  Boot ID:                    13d26f90-4583-404e-9e97-b1d855b45a85
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-blwvw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 etcd-ha-445282-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m27s
	  kube-system                 kindnet-mdqdz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m29s
	  kube-system                 kube-apiserver-ha-445282-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-controller-manager-ha-445282-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-proxy-xs65r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-scheduler-ha-445282-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-vip-ha-445282-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m24s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  6m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m28s (x8 over 6m29s)  kubelet          Node ha-445282-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m28s (x8 over 6m29s)  kubelet          Node ha-445282-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m28s (x7 over 6m29s)  kubelet          Node ha-445282-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m25s                  node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	  Normal  RegisteredNode           6m11s                  node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	  Normal  NodeNotReady             2m53s                  node-controller  Node ha-445282-m02 status is now: NodeNotReady
	
	
	Name:               ha-445282-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-445282-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=ha-445282
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_24_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:24:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-445282-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:29:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:25:02 +0000   Wed, 17 Jul 2024 18:24:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:25:02 +0000   Wed, 17 Jul 2024 18:24:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:25:02 +0000   Wed, 17 Jul 2024 18:24:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:25:02 +0000   Wed, 17 Jul 2024 18:24:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    ha-445282-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164180Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164180Ki
	  pods:               110
	System Info:
	  Machine ID:                 a37bfc2af28c4be69cd12d6b627c60fb
	  System UUID:                a37bfc2a-f28c-4be6-9cd1-2d6b627c60fb
	  Boot ID:                    f7c1c0dd-d81b-4bd7-a98f-9c81b86ac22c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xjpp8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 etcd-ha-445282-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m8s
	  kube-system                 kindnet-x62t5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m10s
	  kube-system                 kube-apiserver-ha-445282-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-controller-manager-ha-445282-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-proxy-zb54p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-scheduler-ha-445282-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-vip-ha-445282-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m7s                   kube-proxy       
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-445282-m03 event: Registered Node ha-445282-m03 in Controller
	  Normal  NodeHasSufficientMemory  5m10s (x8 over 5m10s)  kubelet          Node ha-445282-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m10s (x8 over 5m10s)  kubelet          Node ha-445282-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m10s (x7 over 5m10s)  kubelet          Node ha-445282-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-445282-m03 event: Registered Node ha-445282-m03 in Controller
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-445282-m03 event: Registered Node ha-445282-m03 in Controller
	
	
	Name:               ha-445282-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-445282-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=ha-445282
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_25_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:25:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-445282-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:29:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:25:35 +0000   Wed, 17 Jul 2024 18:25:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:25:35 +0000   Wed, 17 Jul 2024 18:25:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:25:35 +0000   Wed, 17 Jul 2024 18:25:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:25:35 +0000   Wed, 17 Jul 2024 18:25:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    ha-445282-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55cbb1c4afb849b39c587987c52eb826
	  System UUID:                55cbb1c4-afb8-49b3-9c58-7987c52eb826
	  Boot ID:                    11204469-5192-445f-805c-e983f155f9ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-nx7rb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-proxy-jstdw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal  NodeHasSufficientMemory  4m6s (x2 over 4m6s)  kubelet          Node ha-445282-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x2 over 4m6s)  kubelet          Node ha-445282-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x2 over 4m6s)  kubelet          Node ha-445282-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal  NodeReady                3m46s                kubelet          Node ha-445282-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul17 18:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050023] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040164] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.526561] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.440415] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.613050] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.891308] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.059987] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056048] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.193800] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.120214] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.274662] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.047178] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.805512] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.055406] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.996103] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.082270] kauditd_printk_skb: 79 callbacks suppressed
	[ +15.197381] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.192890] kauditd_printk_skb: 34 callbacks suppressed
	[Jul17 18:22] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943] <==
	{"level":"warn","ts":"2024-07-17T18:29:11.831061Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.836501Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.840925Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.844582Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.850971Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.857265Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.863265Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.869146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.871691Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.891024Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.894116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.8976Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.90371Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.907871Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.91117Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.919603Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.924081Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.930247Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.938113Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.941329Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.944322Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.950309Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.959084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.966848Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T18:29:11.994784Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c194f0f1585e7a7d","from":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:29:12 up 8 min,  0 users,  load average: 0.28, 0.28, 0.15
	Linux ha-445282 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22] <==
	I0717 18:28:39.998573       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:28:50.001898       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:28:50.002079       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:28:50.002332       1 main.go:299] Handling node with IPs: map[192.168.39.214:{}]
	I0717 18:28:50.002383       1 main.go:326] Node ha-445282-m03 has CIDR [10.244.2.0/24] 
	I0717 18:28:50.002580       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:28:50.002614       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:28:50.002707       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:28:50.002729       1 main.go:303] handling current node
	I0717 18:28:59.994238       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:28:59.994370       1 main.go:303] handling current node
	I0717 18:28:59.994403       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:28:59.994501       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:28:59.994773       1 main.go:299] Handling node with IPs: map[192.168.39.214:{}]
	I0717 18:28:59.994819       1 main.go:326] Node ha-445282-m03 has CIDR [10.244.2.0/24] 
	I0717 18:28:59.994923       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:28:59.994943       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:29:09.998952       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:29:09.999017       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:29:09.999255       1 main.go:299] Handling node with IPs: map[192.168.39.214:{}]
	I0717 18:29:09.999270       1 main.go:326] Node ha-445282-m03 has CIDR [10.244.2.0/24] 
	I0717 18:29:09.999406       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:29:10.000344       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:29:10.000622       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:29:10.000664       1 main.go:303] handling current node
	
	
	==> kube-apiserver [608260c5da2653858a3ba5ed68d5d0fd133359fe2d82577c89dd208d1fd4061a] <==
	I0717 18:21:38.252099       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 18:21:38.373228       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 18:21:38.381360       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.147]
	I0717 18:21:38.382525       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 18:21:38.387009       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 18:21:38.547630       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 18:21:39.651140       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 18:21:39.662858       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 18:21:39.686588       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 18:21:52.652704       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 18:21:52.698997       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0717 18:24:32.833980       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54804: use of closed network connection
	E0717 18:24:33.027180       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54828: use of closed network connection
	E0717 18:24:33.216008       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54850: use of closed network connection
	E0717 18:24:33.485078       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54866: use of closed network connection
	E0717 18:24:33.684042       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54896: use of closed network connection
	E0717 18:24:33.876765       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54906: use of closed network connection
	E0717 18:24:34.054624       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54920: use of closed network connection
	E0717 18:24:34.234190       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54930: use of closed network connection
	E0717 18:24:34.419918       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54958: use of closed network connection
	E0717 18:24:34.712765       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54988: use of closed network connection
	E0717 18:24:34.905198       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55016: use of closed network connection
	E0717 18:24:35.077222       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55038: use of closed network connection
	E0717 18:24:35.254551       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55046: use of closed network connection
	E0717 18:24:35.615640       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55082: use of closed network connection
	
	
	==> kube-controller-manager [f910525936daaedaf4fb3cce81ed7e6f3f6fb3c9cf2aa2ba7e26987a717c5b8b] <==
	I0717 18:24:01.118382       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-445282-m03\" does not exist"
	I0717 18:24:01.146825       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-445282-m03" podCIDRs=["10.244.2.0/24"]
	I0717 18:24:01.819643       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-445282-m03"
	I0717 18:24:28.418639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.559459ms"
	I0717 18:24:28.510178       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.472106ms"
	I0717 18:24:28.669486       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="159.238373ms"
	I0717 18:24:28.744707       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.165689ms"
	I0717 18:24:28.865093       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="120.334985ms"
	E0717 18:24:28.865122       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0717 18:24:28.865218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.906µs"
	I0717 18:24:28.870615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.303µs"
	I0717 18:24:29.594615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.228µs"
	I0717 18:24:31.072088       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.684µs"
	I0717 18:24:32.273183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.868353ms"
	I0717 18:24:32.274003       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.322µs"
	I0717 18:24:32.402260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.215552ms"
	I0717 18:24:32.402386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.832µs"
	E0717 18:25:05.402853       1 certificate_controller.go:146] Sync csr-gbgzk failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-gbgzk": the object has been modified; please apply your changes to the latest version and try again
	I0717 18:25:05.412298       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-445282-m04\" does not exist"
	I0717 18:25:05.466296       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-445282-m04" podCIDRs=["10.244.3.0/24"]
	I0717 18:25:06.872466       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-445282-m04"
	I0717 18:25:25.867148       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-445282-m04"
	I0717 18:26:18.707329       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-445282-m04"
	I0717 18:26:18.874735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.86891ms"
	I0717 18:26:18.874866       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.448µs"
	
	
	==> kube-proxy [ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0] <==
	I0717 18:21:54.823974       1 server_linux.go:69] "Using iptables proxy"
	I0717 18:21:54.839345       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.147"]
	I0717 18:21:54.877596       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:21:54.877651       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:21:54.877666       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:21:54.880344       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:21:54.880665       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:21:54.880703       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:21:54.881819       1 config.go:192] "Starting service config controller"
	I0717 18:21:54.881952       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:21:54.882002       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:21:54.882020       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:21:54.883767       1 config.go:319] "Starting node config controller"
	I0717 18:21:54.883806       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:21:54.982913       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 18:21:54.982938       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:21:54.984507       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65] <==
	W0717 18:21:36.624504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:21:36.624557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 18:21:37.471065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:21:37.471188       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:21:37.478243       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:21:37.478323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:21:37.660393       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:21:37.660512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 18:21:37.670045       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:21:37.670133       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 18:21:37.831345       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:21:37.831408       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 18:21:37.832239       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:21:37.832474       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 18:21:37.840820       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:21:37.840924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 18:21:37.977802       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:21:37.977857       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:21:38.130649       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:21:38.130764       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 18:21:41.385007       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 18:25:05.655243       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-qltvc\": pod kube-proxy-qltvc is already assigned to node \"ha-445282-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-qltvc" node="ha-445282-m04"
	E0717 18:25:05.655449       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod dd75ca54-55d0-45de-ac57-6bbd0a22db78(kube-system/kube-proxy-qltvc) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-qltvc"
	E0717 18:25:05.655476       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-qltvc\": pod kube-proxy-qltvc is already assigned to node \"ha-445282-m04\"" pod="kube-system/kube-proxy-qltvc"
	I0717 18:25:05.655503       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-qltvc" node="ha-445282-m04"
	
	
	==> kubelet <==
	Jul 17 18:24:39 ha-445282 kubelet[1382]: E0717 18:24:39.589701    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:24:39 ha-445282 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:24:39 ha-445282 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:24:39 ha-445282 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:24:39 ha-445282 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:25:39 ha-445282 kubelet[1382]: E0717 18:25:39.588562    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:25:39 ha-445282 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:25:39 ha-445282 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:25:39 ha-445282 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:25:39 ha-445282 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:26:39 ha-445282 kubelet[1382]: E0717 18:26:39.588134    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:26:39 ha-445282 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:26:39 ha-445282 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:26:39 ha-445282 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:26:39 ha-445282 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:27:39 ha-445282 kubelet[1382]: E0717 18:27:39.588824    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:27:39 ha-445282 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:27:39 ha-445282 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:27:39 ha-445282 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:27:39 ha-445282 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:28:39 ha-445282 kubelet[1382]: E0717 18:28:39.587616    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:28:39 ha-445282 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:28:39 ha-445282 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:28:39 ha-445282 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:28:39 ha-445282 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-445282 -n ha-445282
helpers_test.go:261: (dbg) Run:  kubectl --context ha-445282 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (61.13s)
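Note on the recurring kubelet message in the logs above: the timestamps (18:24:39, 18:25:39, ...) show the kubelet retrying its iptables canary once a minute, each time failing to create the KUBE-KUBELET-CANARY chain because the guest's ip6tables "nat" table cannot be initialized (the "do you need to insmod?" hint points at a missing ip6table_nat kernel module), so ip6tables exits with status 3. Below is a minimal Go sketch of an equivalent probe, assuming only that an ip6tables binary is on PATH; the CANARY-PROBE chain name is a hypothetical placeholder, not the kubelet's.

	// canaryprobe.go - try to create a chain in the ip6tables "nat" table,
	// the same operation the kubelet log above reports as failing with exit status 3.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// -w waits for the xtables lock, -t nat selects the NAT table, -N creates a new chain.
		// On a kernel without the ip6tables nat table this fails with "Table does not exist".
		out, err := exec.Command("ip6tables", "-w", "-t", "nat", "-N", "CANARY-PROBE").CombinedOutput()
		if err != nil {
			fmt.Printf("probe failed: %v\n%s", err, out)
			return
		}
		fmt.Println("ip6tables nat table is available; removing probe chain")
		_ = exec.Command("ip6tables", "-w", "-t", "nat", "-X", "CANARY-PROBE").Run()
	}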

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-445282 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-445282 -v=7 --alsologtostderr
E0717 18:30:05.951763  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:30:33.636475  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-445282 -v=7 --alsologtostderr: exit status 82 (2m1.898602589s)

                                                
                                                
-- stdout --
	* Stopping node "ha-445282-m04"  ...
	* Stopping node "ha-445282-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:29:13.446438  417501 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:29:13.446713  417501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:29:13.446724  417501 out.go:304] Setting ErrFile to fd 2...
	I0717 18:29:13.446728  417501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:29:13.447370  417501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:29:13.447813  417501 out.go:298] Setting JSON to false
	I0717 18:29:13.447966  417501 mustload.go:65] Loading cluster: ha-445282
	I0717 18:29:13.448536  417501 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:29:13.448684  417501 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:29:13.448909  417501 mustload.go:65] Loading cluster: ha-445282
	I0717 18:29:13.449067  417501 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:29:13.449115  417501 stop.go:39] StopHost: ha-445282-m04
	I0717 18:29:13.449512  417501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:13.449560  417501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:13.464340  417501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41887
	I0717 18:29:13.464849  417501 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:13.465459  417501 main.go:141] libmachine: Using API Version  1
	I0717 18:29:13.465481  417501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:13.465907  417501 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:13.468291  417501 out.go:177] * Stopping node "ha-445282-m04"  ...
	I0717 18:29:13.469791  417501 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 18:29:13.469831  417501 main.go:141] libmachine: (ha-445282-m04) Calling .DriverName
	I0717 18:29:13.470063  417501 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 18:29:13.470087  417501 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHHostname
	I0717 18:29:13.473038  417501 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:29:13.473511  417501 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:24:50 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:29:13.473540  417501 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:29:13.473734  417501 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHPort
	I0717 18:29:13.473948  417501 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHKeyPath
	I0717 18:29:13.474091  417501 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHUsername
	I0717 18:29:13.474238  417501 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m04/id_rsa Username:docker}
	I0717 18:29:13.564098  417501 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 18:29:13.617608  417501 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 18:29:13.671348  417501 main.go:141] libmachine: Stopping "ha-445282-m04"...
	I0717 18:29:13.671382  417501 main.go:141] libmachine: (ha-445282-m04) Calling .GetState
	I0717 18:29:13.672956  417501 main.go:141] libmachine: (ha-445282-m04) Calling .Stop
	I0717 18:29:13.676644  417501 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 0/120
	I0717 18:29:14.881870  417501 main.go:141] libmachine: (ha-445282-m04) Calling .GetState
	I0717 18:29:14.883193  417501 main.go:141] libmachine: Machine "ha-445282-m04" was stopped.
	I0717 18:29:14.883213  417501 stop.go:75] duration metric: took 1.413425322s to stop
	I0717 18:29:14.883235  417501 stop.go:39] StopHost: ha-445282-m03
	I0717 18:29:14.883515  417501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:29:14.883555  417501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:29:14.898457  417501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0717 18:29:14.898889  417501 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:29:14.899382  417501 main.go:141] libmachine: Using API Version  1
	I0717 18:29:14.899403  417501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:29:14.899747  417501 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:29:14.902761  417501 out.go:177] * Stopping node "ha-445282-m03"  ...
	I0717 18:29:14.904235  417501 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 18:29:14.904264  417501 main.go:141] libmachine: (ha-445282-m03) Calling .DriverName
	I0717 18:29:14.904510  417501 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 18:29:14.904540  417501 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHHostname
	I0717 18:29:14.907337  417501 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:29:14.907832  417501 main.go:141] libmachine: (ha-445282-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:b1:51", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:23:21 +0000 UTC Type:0 Mac:52:54:00:da:b1:51 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:ha-445282-m03 Clientid:01:52:54:00:da:b1:51}
	I0717 18:29:14.907874  417501 main.go:141] libmachine: (ha-445282-m03) DBG | domain ha-445282-m03 has defined IP address 192.168.39.214 and MAC address 52:54:00:da:b1:51 in network mk-ha-445282
	I0717 18:29:14.908188  417501 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHPort
	I0717 18:29:14.908378  417501 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHKeyPath
	I0717 18:29:14.908575  417501 main.go:141] libmachine: (ha-445282-m03) Calling .GetSSHUsername
	I0717 18:29:14.908735  417501 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m03/id_rsa Username:docker}
	I0717 18:29:14.997333  417501 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 18:29:15.051810  417501 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 18:29:15.107152  417501 main.go:141] libmachine: Stopping "ha-445282-m03"...
	I0717 18:29:15.107179  417501 main.go:141] libmachine: (ha-445282-m03) Calling .GetState
	I0717 18:29:15.108744  417501 main.go:141] libmachine: (ha-445282-m03) Calling .Stop
	I0717 18:29:15.112273  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 0/120
	I0717 18:29:16.113760  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 1/120
	I0717 18:29:17.115896  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 2/120
	I0717 18:29:18.117228  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 3/120
	I0717 18:29:19.118536  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 4/120
	I0717 18:29:20.120462  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 5/120
	I0717 18:29:21.122058  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 6/120
	I0717 18:29:22.123469  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 7/120
	I0717 18:29:23.124604  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 8/120
	I0717 18:29:24.125988  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 9/120
	I0717 18:29:25.127842  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 10/120
	I0717 18:29:26.129081  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 11/120
	I0717 18:29:27.130596  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 12/120
	I0717 18:29:28.131885  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 13/120
	I0717 18:29:29.133863  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 14/120
	I0717 18:29:30.135260  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 15/120
	I0717 18:29:31.136523  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 16/120
	I0717 18:29:32.137884  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 17/120
	I0717 18:29:33.139344  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 18/120
	I0717 18:29:34.140672  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 19/120
	I0717 18:29:35.142717  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 20/120
	I0717 18:29:36.144098  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 21/120
	I0717 18:29:37.145676  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 22/120
	I0717 18:29:38.147099  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 23/120
	I0717 18:29:39.149524  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 24/120
	I0717 18:29:40.151276  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 25/120
	I0717 18:29:41.152655  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 26/120
	I0717 18:29:42.153936  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 27/120
	I0717 18:29:43.155297  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 28/120
	I0717 18:29:44.156743  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 29/120
	I0717 18:29:45.158466  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 30/120
	I0717 18:29:46.160891  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 31/120
	I0717 18:29:47.162073  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 32/120
	I0717 18:29:48.163486  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 33/120
	I0717 18:29:49.164966  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 34/120
	I0717 18:29:50.166740  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 35/120
	I0717 18:29:51.168105  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 36/120
	I0717 18:29:52.169564  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 37/120
	I0717 18:29:53.170825  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 38/120
	I0717 18:29:54.171996  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 39/120
	I0717 18:29:55.173718  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 40/120
	I0717 18:29:56.175090  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 41/120
	I0717 18:29:57.176436  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 42/120
	I0717 18:29:58.178012  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 43/120
	I0717 18:29:59.179272  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 44/120
	I0717 18:30:00.181203  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 45/120
	I0717 18:30:01.182571  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 46/120
	I0717 18:30:02.183823  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 47/120
	I0717 18:30:03.185497  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 48/120
	I0717 18:30:04.186828  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 49/120
	I0717 18:30:05.188659  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 50/120
	I0717 18:30:06.190013  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 51/120
	I0717 18:30:07.191636  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 52/120
	I0717 18:30:08.193073  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 53/120
	I0717 18:30:09.194504  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 54/120
	I0717 18:30:10.196361  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 55/120
	I0717 18:30:11.197824  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 56/120
	I0717 18:30:12.199358  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 57/120
	I0717 18:30:13.201202  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 58/120
	I0717 18:30:14.202649  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 59/120
	I0717 18:30:15.204248  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 60/120
	I0717 18:30:16.205727  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 61/120
	I0717 18:30:17.207106  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 62/120
	I0717 18:30:18.208468  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 63/120
	I0717 18:30:19.210150  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 64/120
	I0717 18:30:20.211806  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 65/120
	I0717 18:30:21.213018  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 66/120
	I0717 18:30:22.214443  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 67/120
	I0717 18:30:23.216254  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 68/120
	I0717 18:30:24.217574  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 69/120
	I0717 18:30:25.219304  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 70/120
	I0717 18:30:26.220586  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 71/120
	I0717 18:30:27.221851  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 72/120
	I0717 18:30:28.223142  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 73/120
	I0717 18:30:29.224499  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 74/120
	I0717 18:30:30.226286  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 75/120
	I0717 18:30:31.228017  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 76/120
	I0717 18:30:32.229332  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 77/120
	I0717 18:30:33.230863  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 78/120
	I0717 18:30:34.232118  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 79/120
	I0717 18:30:35.233729  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 80/120
	I0717 18:30:36.235180  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 81/120
	I0717 18:30:37.236453  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 82/120
	I0717 18:30:38.237743  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 83/120
	I0717 18:30:39.239100  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 84/120
	I0717 18:30:40.240744  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 85/120
	I0717 18:30:41.242025  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 86/120
	I0717 18:30:42.243212  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 87/120
	I0717 18:30:43.244681  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 88/120
	I0717 18:30:44.246021  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 89/120
	I0717 18:30:45.247528  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 90/120
	I0717 18:30:46.248773  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 91/120
	I0717 18:30:47.249972  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 92/120
	I0717 18:30:48.251593  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 93/120
	I0717 18:30:49.252839  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 94/120
	I0717 18:30:50.254456  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 95/120
	I0717 18:30:51.256260  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 96/120
	I0717 18:30:52.258331  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 97/120
	I0717 18:30:53.259737  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 98/120
	I0717 18:30:54.260995  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 99/120
	I0717 18:30:55.262778  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 100/120
	I0717 18:30:56.264273  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 101/120
	I0717 18:30:57.265698  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 102/120
	I0717 18:30:58.267362  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 103/120
	I0717 18:30:59.268589  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 104/120
	I0717 18:31:00.270299  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 105/120
	I0717 18:31:01.271876  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 106/120
	I0717 18:31:02.273266  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 107/120
	I0717 18:31:03.274947  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 108/120
	I0717 18:31:04.276470  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 109/120
	I0717 18:31:05.278209  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 110/120
	I0717 18:31:06.279965  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 111/120
	I0717 18:31:07.281448  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 112/120
	I0717 18:31:08.282810  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 113/120
	I0717 18:31:09.284205  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 114/120
	I0717 18:31:10.285938  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 115/120
	I0717 18:31:11.287189  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 116/120
	I0717 18:31:12.288560  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 117/120
	I0717 18:31:13.290285  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 118/120
	I0717 18:31:14.291604  417501 main.go:141] libmachine: (ha-445282-m03) Waiting for machine to stop 119/120
	I0717 18:31:15.292413  417501 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 18:31:15.292526  417501 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 18:31:15.294645  417501 out.go:177] 
	W0717 18:31:15.295985  417501 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 18:31:15.296006  417501 out.go:239] * 
	* 
	W0717 18:31:15.298926  417501 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 18:31:15.300186  417501 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-445282 -v=7 --alsologtostderr" : exit status 82
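For context on the exit status 82 above: the stderr trace shows the stop path backing up /etc/cni and /etc/kubernetes on each node, issuing a stop, and then polling the VM state once per second; ha-445282-m04 stopped after a single poll, while ha-445282-m03 was still "Running" after all 120 polls (roughly 2 minutes), which minikube reports as GUEST_STOP_TIMEOUT. The following is a rough, hypothetical sketch of that kind of bounded poll loop, not minikube's actual implementation; the pollUntilStopped helper is made up for illustration.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// pollUntilStopped polls a machine-state callback once per second for up to
	// `attempts` tries (120 in the trace above) before giving up.
	func pollUntilStopped(state func() string, attempts int) error {
		for i := 0; i < attempts; i++ {
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			if state() == "Stopped" {
				return nil
			}
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// A VM that never reports "Stopped" reproduces the timeout path
		// (3 attempts here instead of 120 to keep the demo short).
		err := pollUntilStopped(func() string { return "Running" }, 3)
		fmt.Println("stop err:", err)
	}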
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-445282 --wait=true -v=7 --alsologtostderr
E0717 18:32:13.094203  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:33:36.142734  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:35:05.951666  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-445282 --wait=true -v=7 --alsologtostderr: (4m5.549476537s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-445282
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-445282 -n ha-445282
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-445282 logs -n 25: (1.860950981s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-445282 cp ha-445282-m03:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m02:/home/docker/cp-test_ha-445282-m03_ha-445282-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282-m02 sudo cat                                          | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m03_ha-445282-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m03:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04:/home/docker/cp-test_ha-445282-m03_ha-445282-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282-m04 sudo cat                                          | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m03_ha-445282-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-445282 cp testdata/cp-test.txt                                                | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3528186093/001/cp-test_ha-445282-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282:/home/docker/cp-test_ha-445282-m04_ha-445282.txt                       |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282 sudo cat                                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m04_ha-445282.txt                                 |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m02:/home/docker/cp-test_ha-445282-m04_ha-445282-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282-m02 sudo cat                                          | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m04_ha-445282-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m03:/home/docker/cp-test_ha-445282-m04_ha-445282-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282-m03 sudo cat                                          | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m04_ha-445282-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-445282 node stop m02 -v=7                                                     | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-445282 node start m02 -v=7                                                    | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-445282 -v=7                                                           | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-445282 -v=7                                                                | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-445282 --wait=true -v=7                                                    | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:35 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-445282                                                                | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:31:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:31:15.349819  417974 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:31:15.350332  417974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:31:15.350350  417974 out.go:304] Setting ErrFile to fd 2...
	I0717 18:31:15.350359  417974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:31:15.350837  417974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:31:15.351820  417974 out.go:298] Setting JSON to false
	I0717 18:31:15.352878  417974 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8018,"bootTime":1721233057,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:31:15.352947  417974 start.go:139] virtualization: kvm guest
	I0717 18:31:15.355062  417974 out.go:177] * [ha-445282] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:31:15.356651  417974 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 18:31:15.356714  417974 notify.go:220] Checking for updates...
	I0717 18:31:15.358908  417974 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:31:15.360239  417974 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:31:15.361497  417974 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:31:15.362814  417974 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:31:15.364037  417974 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:31:15.365918  417974 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:31:15.366056  417974 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:31:15.366681  417974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:31:15.366764  417974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:31:15.383167  417974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I0717 18:31:15.383634  417974 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:31:15.384248  417974 main.go:141] libmachine: Using API Version  1
	I0717 18:31:15.384276  417974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:31:15.384773  417974 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:31:15.385042  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:31:15.422215  417974 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 18:31:15.423532  417974 start.go:297] selected driver: kvm2
	I0717 18:31:15.423549  417974 start.go:901] validating driver "kvm2" against &{Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.41 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:31:15.423677  417974 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:31:15.424014  417974 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:31:15.424093  417974 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:31:15.439117  417974 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:31:15.439864  417974 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:31:15.439950  417974 cni.go:84] Creating CNI manager for ""
	I0717 18:31:15.439967  417974 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 18:31:15.440042  417974 start.go:340] cluster config:
	{Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.41 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:31:15.440192  417974 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:31:15.442171  417974 out.go:177] * Starting "ha-445282" primary control-plane node in "ha-445282" cluster
	I0717 18:31:15.443413  417974 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:31:15.443455  417974 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:31:15.443466  417974 cache.go:56] Caching tarball of preloaded images
	I0717 18:31:15.443591  417974 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:31:15.443605  417974 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:31:15.443740  417974 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:31:15.443955  417974 start.go:360] acquireMachinesLock for ha-445282: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:31:15.444006  417974 start.go:364] duration metric: took 29.625µs to acquireMachinesLock for "ha-445282"
	I0717 18:31:15.444026  417974 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:31:15.444036  417974 fix.go:54] fixHost starting: 
	I0717 18:31:15.444298  417974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:31:15.444339  417974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:31:15.459024  417974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34075
	I0717 18:31:15.459458  417974 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:31:15.459939  417974 main.go:141] libmachine: Using API Version  1
	I0717 18:31:15.459960  417974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:31:15.460258  417974 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:31:15.460461  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:31:15.460645  417974 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:31:15.462563  417974 fix.go:112] recreateIfNeeded on ha-445282: state=Running err=<nil>
	W0717 18:31:15.462582  417974 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:31:15.464709  417974 out.go:177] * Updating the running kvm2 "ha-445282" VM ...
	I0717 18:31:15.465969  417974 machine.go:94] provisionDockerMachine start ...
	I0717 18:31:15.465997  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:31:15.466218  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:31:15.468868  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.469341  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:31:15.469366  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.469548  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:31:15.469743  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:15.469922  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:15.470042  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:31:15.470201  417974 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:15.470408  417974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:31:15.470421  417974 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:31:15.578669  417974 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-445282
	
	I0717 18:31:15.578705  417974 main.go:141] libmachine: (ha-445282) Calling .GetMachineName
	I0717 18:31:15.579006  417974 buildroot.go:166] provisioning hostname "ha-445282"
	I0717 18:31:15.579062  417974 main.go:141] libmachine: (ha-445282) Calling .GetMachineName
	I0717 18:31:15.579311  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:31:15.582401  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.582857  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:31:15.582887  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.583000  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:31:15.583223  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:15.583375  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:15.583497  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:31:15.583636  417974 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:15.583811  417974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:31:15.583822  417974 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-445282 && echo "ha-445282" | sudo tee /etc/hostname
	I0717 18:31:15.708579  417974 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-445282
	
	I0717 18:31:15.708611  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:31:15.711369  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.711829  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:31:15.711862  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.712089  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:31:15.712349  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:15.712568  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:15.712755  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:31:15.712953  417974 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:15.713251  417974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:31:15.713282  417974 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-445282' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-445282/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-445282' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:31:15.821917  417974 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:31:15.821956  417974 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 18:31:15.822020  417974 buildroot.go:174] setting up certificates
	I0717 18:31:15.822039  417974 provision.go:84] configureAuth start
	I0717 18:31:15.822050  417974 main.go:141] libmachine: (ha-445282) Calling .GetMachineName
	I0717 18:31:15.822359  417974 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:31:15.825046  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.825498  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:31:15.825526  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.825675  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:31:15.827929  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.828376  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:31:15.828398  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.828631  417974 provision.go:143] copyHostCerts
	I0717 18:31:15.828685  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:31:15.828725  417974 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 18:31:15.828740  417974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:31:15.828811  417974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 18:31:15.828917  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:31:15.828934  417974 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 18:31:15.828941  417974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:31:15.828979  417974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 18:31:15.829044  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:31:15.829061  417974 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 18:31:15.829069  417974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:31:15.829109  417974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 18:31:15.829159  417974 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.ha-445282 san=[127.0.0.1 192.168.39.147 ha-445282 localhost minikube]
	I0717 18:31:15.952017  417974 provision.go:177] copyRemoteCerts
	I0717 18:31:15.952079  417974 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:31:15.952108  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:31:15.955042  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.955386  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:31:15.955412  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.955565  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:31:15.955777  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:15.955985  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:31:15.956249  417974 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:31:16.039403  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 18:31:16.039488  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 18:31:16.068546  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 18:31:16.068646  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0717 18:31:16.097350  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 18:31:16.097440  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:31:16.122084  417974 provision.go:87] duration metric: took 300.02862ms to configureAuth
	I0717 18:31:16.122119  417974 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:31:16.122560  417974 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:31:16.122677  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:31:16.125191  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:16.125605  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:31:16.125636  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:16.125790  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:31:16.126006  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:16.126207  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:16.126369  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:31:16.126544  417974 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:16.126777  417974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:31:16.126797  417974 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:32:46.960029  417974 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:32:46.960066  417974 machine.go:97] duration metric: took 1m31.494073461s to provisionDockerMachine
	I0717 18:32:46.960097  417974 start.go:293] postStartSetup for "ha-445282" (driver="kvm2")
	I0717 18:32:46.960111  417974 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:32:46.960135  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:32:46.960535  417974 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:32:46.960578  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:32:46.964198  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:46.964869  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:32:46.964893  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:46.965072  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:32:46.965274  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:32:46.965459  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:32:46.965594  417974 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:32:47.048203  417974 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:32:47.052734  417974 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:32:47.052763  417974 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 18:32:47.052840  417974 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 18:32:47.052931  417974 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 18:32:47.052944  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /etc/ssl/certs/4001712.pem
	I0717 18:32:47.053054  417974 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:32:47.062755  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:32:47.088671  417974 start.go:296] duration metric: took 128.55067ms for postStartSetup
	I0717 18:32:47.088728  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:32:47.089052  417974 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 18:32:47.089102  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:32:47.091568  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.091929  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:32:47.091952  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.092146  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:32:47.092383  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:32:47.092579  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:32:47.092732  417974 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	W0717 18:32:47.176415  417974 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0717 18:32:47.176449  417974 fix.go:56] duration metric: took 1m31.732414182s for fixHost
	I0717 18:32:47.176472  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:32:47.179208  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.179518  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:32:47.179549  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.179769  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:32:47.179995  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:32:47.180195  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:32:47.180398  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:32:47.180553  417974 main.go:141] libmachine: Using SSH client type: native
	I0717 18:32:47.180763  417974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:32:47.180777  417974 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 18:32:47.289473  417974 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241167.251273522
	
	I0717 18:32:47.289507  417974 fix.go:216] guest clock: 1721241167.251273522
	I0717 18:32:47.289515  417974 fix.go:229] Guest: 2024-07-17 18:32:47.251273522 +0000 UTC Remote: 2024-07-17 18:32:47.176455495 +0000 UTC m=+91.865165448 (delta=74.818027ms)
	I0717 18:32:47.289545  417974 fix.go:200] guest clock delta is within tolerance: 74.818027ms
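	[Editor's note] The three fix.go lines above compare the guest clock (read over SSH with date +%s.%N) against the host clock and skip any resync because the delta is within tolerance. A minimal stand-alone sketch of the same check, assuming a reachable guest at $GUEST and an illustrative 2-second tolerance (the actual tolerance value is not printed in this log):

	# Read the guest wall clock over SSH and compare it with the local host clock.
	guest_epoch=$(ssh "$GUEST" 'date +%s.%N')
	host_epoch=$(date +%s.%N)
	delta=$(echo "$guest_epoch - $host_epoch" | bc)   # positive when the guest is ahead
	# Accept drift below the tolerance; anything larger would require resetting the guest clock.
	if awk -v d="$delta" -v tol="2" 'BEGIN { exit ((d < 0 ? -d : d) > tol) }'; then
	  echo "guest clock delta ${delta}s is within tolerance"
	else
	  echo "guest clock delta ${delta}s exceeds tolerance"
	fi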
	I0717 18:32:47.289554  417974 start.go:83] releasing machines lock for "ha-445282", held for 1m31.845536108s
	I0717 18:32:47.289676  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:32:47.289974  417974 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:32:47.292370  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.292779  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:32:47.292810  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.292968  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:32:47.293498  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:32:47.293708  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:32:47.293822  417974 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:32:47.293866  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:32:47.293955  417974 ssh_runner.go:195] Run: cat /version.json
	I0717 18:32:47.294010  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:32:47.297101  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.297513  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:32:47.297549  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.297658  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.297680  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:32:47.297870  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:32:47.298013  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:32:47.298088  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:32:47.298146  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.298160  417974 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:32:47.298258  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:32:47.298427  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:32:47.298586  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:32:47.298739  417974 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:32:47.398465  417974 ssh_runner.go:195] Run: systemctl --version
	I0717 18:32:47.404972  417974 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:32:47.566918  417974 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:32:47.575381  417974 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:32:47.575460  417974 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:32:47.585666  417974 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 18:32:47.585692  417974 start.go:495] detecting cgroup driver to use...
	I0717 18:32:47.585752  417974 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:32:47.602578  417974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:32:47.616547  417974 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:32:47.616603  417974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:32:47.630572  417974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:32:47.645635  417974 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:32:47.808451  417974 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:32:47.963011  417974 docker.go:233] disabling docker service ...
	I0717 18:32:47.963094  417974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:32:47.983633  417974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:32:48.000804  417974 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:32:48.174007  417974 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:32:48.320071  417974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:32:48.336089  417974 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:32:48.355154  417974 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:32:48.355215  417974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:32:48.366769  417974 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:32:48.366835  417974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:32:48.378210  417974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:32:48.388824  417974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:32:48.399726  417974 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:32:48.410860  417974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:32:48.421790  417974 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:32:48.432509  417974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:32:48.443176  417974 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:32:48.452986  417974 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:32:48.462988  417974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:32:48.614270  417974 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:32:50.711056  417974 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.096733835s)
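	[Editor's note] The run above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs driver, conmon cgroup, unprivileged-port sysctl), enables IPv4 forwarding, and restarts CRI-O, which completes here in about 2.1 seconds. A condensed sketch of the same edits executed directly on the guest as root (paths and values taken from the commands logged above):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# Pin the pause image and switch CRI-O to the cgroupfs cgroup driver.
	sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	# Recreate conmon_cgroup = "pod" immediately after the cgroup_manager line.
	sed -i '/conmon_cgroup = .*/d' "$CONF"
	sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# Ensure a default_sysctls list exists and allow pods to bind low ports.
	grep -q '^ *default_sysctls' "$CONF" || sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	# Enable IPv4 forwarding and apply the new configuration.
	echo 1 > /proc/sys/net/ipv4/ip_forward
	systemctl daemon-reload && systemctl restart crio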
	I0717 18:32:50.711104  417974 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:32:50.711175  417974 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:32:50.716284  417974 start.go:563] Will wait 60s for crictl version
	I0717 18:32:50.716349  417974 ssh_runner.go:195] Run: which crictl
	I0717 18:32:50.720257  417974 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:32:50.764053  417974 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:32:50.764160  417974 ssh_runner.go:195] Run: crio --version
	I0717 18:32:50.793316  417974 ssh_runner.go:195] Run: crio --version
	I0717 18:32:50.823476  417974 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:32:50.824801  417974 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:32:50.827602  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:50.828036  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:32:50.828063  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:50.828222  417974 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:32:50.833105  417974 kubeadm.go:883] updating cluster {Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.41 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:32:50.833292  417974 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:32:50.833351  417974 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:32:50.882161  417974 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:32:50.882187  417974 crio.go:433] Images already preloaded, skipping extraction
	I0717 18:32:50.882246  417974 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:32:50.918801  417974 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:32:50.918832  417974 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:32:50.918843  417974 kubeadm.go:934] updating node { 192.168.39.147 8443 v1.30.2 crio true true} ...
	I0717 18:32:50.918962  417974 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-445282 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
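	[Editor's note] The [Unit]/[Service]/[Install] fragment above is the kubelet drop-in that is copied further down to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes). A quick, illustrative way to confirm the kubelet actually picked up the override after the daemon-reload (commands assumed from standard systemd usage, not taken from this log):

	# Show the unit together with all drop-ins, then the effective ExecStart line.
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart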
	I0717 18:32:50.919040  417974 ssh_runner.go:195] Run: crio config
	I0717 18:32:50.971008  417974 cni.go:84] Creating CNI manager for ""
	I0717 18:32:50.971032  417974 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 18:32:50.971051  417974 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:32:50.971075  417974 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-445282 NodeName:ha-445282 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:32:50.971246  417974 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-445282"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
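
	[Editor's note] This generated kubeadm configuration is written below to /var/tmp/minikube/kubeadm.yaml.new (2153 bytes). Purely as a hedged illustration, and not something this test run performs, such a file could be exercised against the bundled kubeadm binary in dry-run mode before a real (re)initialization:

	# Hypothetical manual check: render what kubeadm would do with the generated config, without changing the node.
	sudo /var/lib/minikube/binaries/v1.30.2/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run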
	
	I0717 18:32:50.971276  417974 kube-vip.go:115] generating kube-vip config ...
	I0717 18:32:50.971327  417974 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 18:32:50.984148  417974 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 18:32:50.984281  417974 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
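	[Editor's note] The kube-vip manifest above is deployed as a static pod: it is copied below to the kubelet staticPodPath /etc/kubernetes/manifests as kube-vip.yaml, and the kubelet starts it without going through the API server. A minimal sketch of verifying this on the node (the crictl invocation is standard usage assumed for illustration, not taken from this log):

	# The kubelet watches this directory and runs every manifest in it as a static pod.
	ls /etc/kubernetes/manifests/kube-vip.yaml
	# Confirm the container is running under CRI-O once the kubelet has created it.
	sudo crictl ps --name kube-vip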
	I0717 18:32:50.984360  417974 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:32:50.994594  417974 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:32:50.994674  417974 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 18:32:51.004637  417974 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0717 18:32:51.021125  417974 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:32:51.037466  417974 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0717 18:32:51.054162  417974 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 18:32:51.073237  417974 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 18:32:51.077095  417974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:32:51.228071  417974 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:32:51.243845  417974 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282 for IP: 192.168.39.147
	I0717 18:32:51.243870  417974 certs.go:194] generating shared ca certs ...
	I0717 18:32:51.243887  417974 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:51.244047  417974 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 18:32:51.244090  417974 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 18:32:51.244099  417974 certs.go:256] generating profile certs ...
	I0717 18:32:51.244181  417974 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key
	I0717 18:32:51.244209  417974 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.6292725e
	I0717 18:32:51.244224  417974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.6292725e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.147 192.168.39.198 192.168.39.214 192.168.39.254]
	I0717 18:32:51.360280  417974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.6292725e ...
	I0717 18:32:51.360320  417974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.6292725e: {Name:mkf49a6ec11aa829e1269ba54cc0595eb1191166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:51.360515  417974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.6292725e ...
	I0717 18:32:51.360531  417974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.6292725e: {Name:mk415e9bf668acc349201fe00a8a04c4a6d6499d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:51.360618  417974 certs.go:381] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.6292725e -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt
	I0717 18:32:51.360778  417974 certs.go:385] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.6292725e -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key
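	[Editor's note] The apiserver serving certificate assembled above carries the SANs listed at 18:32:51.244224: the service IPs 10.96.0.1 and 10.0.0.1, 127.0.0.1, the three control-plane node IPs, and the kube-vip address 192.168.39.254. To read the SAN extension back out of the generated file (illustrative command, not part of this run):

	# Print the Subject Alternative Name extension of the freshly written apiserver certificate.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'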
	I0717 18:32:51.360916  417974 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key
	I0717 18:32:51.360931  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 18:32:51.360944  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 18:32:51.360954  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 18:32:51.360966  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 18:32:51.360975  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 18:32:51.360986  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 18:32:51.360995  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 18:32:51.361007  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 18:32:51.361065  417974 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 18:32:51.361095  417974 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 18:32:51.361102  417974 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 18:32:51.361122  417974 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 18:32:51.361144  417974 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:32:51.361163  417974 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 18:32:51.361199  417974 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:32:51.361221  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /usr/share/ca-certificates/4001712.pem
	I0717 18:32:51.361244  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:32:51.361260  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem -> /usr/share/ca-certificates/400171.pem
	I0717 18:32:51.361856  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:32:51.388667  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:32:51.413024  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:32:51.437128  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 18:32:51.461105  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 18:32:51.485062  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:32:51.507823  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:32:51.530300  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:32:51.553251  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 18:32:51.575882  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:32:51.600808  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 18:32:51.624081  417974 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:32:51.640316  417974 ssh_runner.go:195] Run: openssl version
	I0717 18:32:51.647532  417974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 18:32:51.658873  417974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 18:32:51.663281  417974 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 18:32:51.663357  417974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 18:32:51.669147  417974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:32:51.679216  417974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:32:51.690863  417974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:32:51.695546  417974 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:32:51.695613  417974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:32:51.701213  417974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:32:51.710722  417974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 18:32:51.722754  417974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 18:32:51.727230  417974 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 18:32:51.727287  417974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 18:32:51.732805  417974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
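	[Editor's note] The openssl x509 -hash / ln -fs pairs above implement the standard OpenSSL CA directory layout: every trusted certificate under /usr/share/ca-certificates is reachable in /etc/ssl/certs through a symlink named after its subject hash with a .0 suffix (3ec20f2e.0, b5213941.0 and 51391683.0 in this run). A generic sketch of the same convention for an arbitrary certificate (the $CERT path is an assumption):

	CERT=/usr/share/ca-certificates/example.pem    # hypothetical certificate path
	HASH=$(openssl x509 -hash -noout -in "$CERT")  # subject hash OpenSSL uses for lookup
	# Link the certificate under its hash so OpenSSL-based clients can resolve it by hash.
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"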
	I0717 18:32:51.742316  417974 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:32:51.746852  417974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:32:51.752601  417974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:32:51.757953  417974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:32:51.763277  417974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:32:51.768648  417974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:32:51.774146  417974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
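	[Editor's note] Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours, which lets minikube flag control-plane certificates that would need regeneration before the restart. The same checks, consolidated into a single loop (file list copied from the log; the loop itself is only illustrative):

	for crt in apiserver-etcd-client.crt apiserver-kubelet-client.crt \
	           etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt \
	           front-proxy-client.crt; do
	  # -checkend 86400 fails (exit 1) if the certificate expires within 86400 seconds (24h).
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$crt" -checkend 86400 \
	    || echo "expiring soon: $crt"
	done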
	I0717 18:32:51.779895  417974 kubeadm.go:392] StartCluster: {Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.41 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:32:51.780024  417974 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:32:51.780074  417974 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:32:51.819398  417974 cri.go:89] found id: "386a254963e27b5bc5449d987aac00c9a82c99a9a2e22bec541039092c57f295"
	I0717 18:32:51.819427  417974 cri.go:89] found id: "f2993e8e42bef1aa7bfbe816a678cba116cffcb0e26b47eaa3660a52f5aa2914"
	I0717 18:32:51.819435  417974 cri.go:89] found id: "5a94a87a35e84533ba262a0519c0e6c3520cb95c10257b1549084c0e27ce453c"
	I0717 18:32:51.819439  417974 cri.go:89] found id: "54ce94edc90340e3fecdf7e9c373bf97b043857f76676c04f062a075824d8435"
	I0717 18:32:51.819443  417974 cri.go:89] found id: "408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570"
	I0717 18:32:51.819448  417974 cri.go:89] found id: "9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1"
	I0717 18:32:51.819452  417974 cri.go:89] found id: "6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22"
	I0717 18:32:51.819456  417974 cri.go:89] found id: "ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0"
	I0717 18:32:51.819460  417974 cri.go:89] found id: "ac29ebebce0938fd21e40b0afaed55120b3a90091496f7e0bb354f366e3983d1"
	I0717 18:32:51.819470  417974 cri.go:89] found id: "09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943"
	I0717 18:32:51.819474  417974 cri.go:89] found id: "608260c5da2653858a3ba5ed68d5d0fd133359fe2d82577c89dd208d1fd4061a"
	I0717 18:32:51.819478  417974 cri.go:89] found id: "f910525936daaedaf4fb3cce81ed7e6f3f6fb3c9cf2aa2ba7e26987a717c5b8b"
	I0717 18:32:51.819481  417974 cri.go:89] found id: "585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65"
	I0717 18:32:51.819485  417974 cri.go:89] found id: ""
	I0717 18:32:51.819541  417974 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.569129385Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e9f8c63ebeab85911ed14c742621ec82efec6902e573dc16676f8e4082ab5c07,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-mcsw8,Uid:727368ca-3135-44f6-93b1-5cfb12476236,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721241210713787603,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:24:28.412126588Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fdcd052ef590ce54260d006ec784a364080bf1d43e6192285e69b3fb59d36bf8,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-445282,Uid:e18a23f8599513addef6c2bfc7f909b3,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1721241192335710866,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e18a23f8599513addef6c2bfc7f909b3,},Annotations:map[string]string{kubernetes.io/config.hash: e18a23f8599513addef6c2bfc7f909b3,kubernetes.io/config.seen: 2024-07-17T18:32:51.034129918Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4f67bfed73cfa2fce6cb36d8a4321f2872b6981de5bb1913a4ebb287f6b6f4b0,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-28njs,Uid:1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721241177108380832,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07
-17T18:22:10.144301013Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9617b3bb8c48cea8a2cc45453fd0391da35dba2a4551bc580cd4c08a5c0c2068,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rzxbr,Uid:9630d87d-3470-4675-9b3c-a10ff614f5e1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721241177025609960,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:22:10.140213967Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:858a82ac2c20ee06b90789d56921732f764623a5f5880f67c9cfa15a23be55b2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-445282,Uid:8d0e44b0150b917f8f54d6a478ddc641,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721241177010644677,Labels:map[string]strin
g{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8d0e44b0150b917f8f54d6a478ddc641,kubernetes.io/config.seen: 2024-07-17T18:21:39.555859412Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:97e958e9cbe30dc85b6498d32e37266e72d4dd032dab0e75dd9293d9dd129709,Metadata:&PodSandboxMetadata{Name:kindnet-75gcw,Uid:872c1132-e584-47c1-a873-74615d52511b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721241177005540831,Labels:map[string]string{app: kindnet,controller-revision-hash: 545f566499,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kube
rnetes.io/config.seen: 2024-07-17T18:21:52.727502293Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fb722c49526acb9b63a3500281d7f12c21959c411b4f5daccf0a4b5c1d2f1f18,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ae931c3b-8935-481d-bef4-0b05dad8c915,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721241177003668772,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/sto
rage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-17T18:22:10.152493513Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6acf1b0fa81b2f5c2a3e6a4b86384528fe7eba7b42939d345a8cbf01e8b0f2cc,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-445282,Uid:058431b563c109d1ce3751345314cdc4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721241177001596653,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,tier: control-plane,},Annotations:map[string]string
{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.147:8443,kubernetes.io/config.hash: 058431b563c109d1ce3751345314cdc4,kubernetes.io/config.seen: 2024-07-17T18:21:39.555865551Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c704554e3847d95caa225b7cc2144d3bd3736cd0216e1fc568a04c6b9667ecdf,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-445282,Uid:b71086ebffd4e15bc7c5f6152b697200,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721241176989511943,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b71086ebffd4e15bc7c5f6152b697200,kubernetes.io/config.seen: 2024-07-17T18:21:39.555866582Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:94c07421a
1bf567b9e9b1f4650f4b35916c572882c20a54c7b9c60a7c3c7010a,Metadata:&PodSandboxMetadata{Name:kube-proxy-vxmp8,Uid:cca555da-b93a-430c-8fbe-7e732af65a3a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721241176982714724,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:21:52.727395024Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:57a78f36912ceea70cc3c12d8156b382a7db9d300b401d1151aa520820775c06,Metadata:&PodSandboxMetadata{Name:etcd-ha-445282,Uid:5611ca3ae268bab43701867e47a0324e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721241176947384582,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-445282,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.147:2379,kubernetes.io/config.hash: 5611ca3ae268bab43701867e47a0324e,kubernetes.io/config.seen: 2024-07-17T18:21:39.555864433Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5737049c-2630-4d3b-ada0-0f727bb76040 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.569892345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e301dba-7f7f-4448-8aa7-f85c45871c5e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.569968892Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e301dba-7f7f-4448-8aa7-f85c45871c5e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.570189694Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a00a7846ea188ac97805c8a904c4e7db5546adbd3c6427366a5e18765f00230,PodSandboxId:fb722c49526acb9b63a3500281d7f12c21959c411b4f5daccf0a4b5c1d2f1f18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241274585957256,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36072ba2e30683562025920089bc3c181de035cdc0c1e1f74c1ffd635cf5ecbe,PodSandboxId:6acf1b0fa81b2f5c2a3e6a4b86384528fe7eba7b42939d345a8cbf01e8b0f2cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241217580315066,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1c461327466076142ce828af6145a0cd0d44d73409fe0f62b672a81260781ee,PodSandboxId:e9f8c63ebeab85911ed14c742621ec82efec6902e573dc16676f8e4082ab5c07,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721241210869764251,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotations:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815de2dec486d9906f6fe63a85a1a5d02a65a60d5e0eb7857d79a62f6d774fe3,PodSandboxId:c704554e3847d95caa225b7cc2144d3bd3736cd0216e1fc568a04c6b9667ecdf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241210077642624,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3563496ca45d287f33046dd2e085c8c769e7441f49b2478272464ce6624cfd9,PodSandboxId:fdcd052ef590ce54260d006ec784a364080bf1d43e6192285e69b3fb59d36bf8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721241192458051681,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e18a23f8599513addef6c2bfc7f909b3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d81bc3c984accc8863f08c5dd41eaeb884cb21afaec241ca9f8f106e49ca4954,PodSandboxId:4f67bfed73cfa2fce6cb36d8a4321f2872b6981de5bb1913a4ebb287f6b6f4b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241177782178274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171da935275e881bc54c6cde276f1768824f6d252865065adf30a82952618b4f,PodSandboxId:9617b3bb8c48cea8a2cc45453fd0391da35dba2a4551bc580cd4c08a5c0c2068,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241177723062771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3
e8405ae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae9c1607affb386fbb47a0752c97c15fb1c66f8d3d004233562d1837b44d8fcf,PodSandboxId:97e958e9cbe30dc85b6498d32e37266e72d4dd032dab0e75dd9293d9dd129709,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721241177505092448,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kuberne
tes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e0f6fe1021c8b008cc337d128d6aa3bc8d47901d78a8033d64c9e2d253d434,PodSandboxId:94c07421a1bf567b9e9b1f4650f4b35916c572882c20a54c7b9c60a7c3c7010a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721241177553592763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43252ed2b3b541f7b1a8cd399b9098b6c0b973167fde832f33cc5504198cd6fd,PodSandboxId:858a82ac2c20ee06b90789d56921732f764623a5f5880f67c9cfa15a23be55b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241177348749770,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26c2a38c92d350a1610d0d12459f90946d841ddbfa020ed8dab89d6a0190073,PodSandboxId:57a78f36912ceea70cc3c12d8156b382a7db9d300b401d1151aa520820775c06,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241177289847854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae2
68bab43701867e47a0324e,},Annotations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e301dba-7f7f-4448-8aa7-f85c45871c5e name=/runtime.v1.RuntimeService/ListContainers
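	The Request/Response pairs above are standard CRI RPCs (runtime.v1.RuntimeService) that CRI-O serves over its unix socket. As a rough sketch only, and assuming CRI-O's default socket path /var/run/crio/crio.sock, the same ListContainers query (filtered to CONTAINER_RUNNING) could be reproduced with the k8s.io/cri-api Go client; this is not part of the test suite.

		// Illustrative only: issue the runtime.v1.RuntimeService/ListContainers
		// call seen in the CRI-O debug log, filtered to running containers.
		package main

		import (
			"context"
			"fmt"
			"time"

			"google.golang.org/grpc"
			"google.golang.org/grpc/credentials/insecure"
			runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
		)

		func main() {
			// Assumed socket path for CRI-O; adjust if the runtime endpoint differs.
			conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
				grpc.WithTransportCredentials(insecure.NewCredentials()))
			if err != nil {
				panic(err)
			}
			defer conn.Close()

			client := runtimeapi.NewRuntimeServiceClient(conn)
			ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
			defer cancel()

			resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
				Filter: &runtimeapi.ContainerFilter{
					State: &runtimeapi.ContainerStateValue{
						State: runtimeapi.ContainerState_CONTAINER_RUNNING,
					},
				},
			})
			if err != nil {
				panic(err)
			}
			for _, c := range resp.Containers {
				fmt.Printf("%s  %s  attempt=%d\n", c.Id, c.Metadata.Name, c.Metadata.Attempt)
			}
		}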
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.602009976Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da6f6c8e-18b1-496c-b5cf-8fd892811ea4 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.602128910Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da6f6c8e-18b1-496c-b5cf-8fd892811ea4 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.603188592Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=762983d0-ecb4-4083-95a0-f8c11df9d673 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.603800894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721241321603777619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=762983d0-ecb4-4083-95a0-f8c11df9d673 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.604400881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca80b33c-b803-4cb6-9d95-8974d8d37f61 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.604529646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca80b33c-b803-4cb6-9d95-8974d8d37f61 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.604998247Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a00a7846ea188ac97805c8a904c4e7db5546adbd3c6427366a5e18765f00230,PodSandboxId:fb722c49526acb9b63a3500281d7f12c21959c411b4f5daccf0a4b5c1d2f1f18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241274585957256,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef4c83460a4233c24b932a08223b2c48f01338e960d513729d2cfe392d618067,PodSandboxId:fb722c49526acb9b63a3500281d7f12c21959c411b4f5daccf0a4b5c1d2f1f18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721241218591112768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36072ba2e30683562025920089bc3c181de035cdc0c1e1f74c1ffd635cf5ecbe,PodSandboxId:6acf1b0fa81b2f5c2a3e6a4b86384528fe7eba7b42939d345a8cbf01e8b0f2cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241217580315066,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1c461327466076142ce828af6145a0cd0d44d73409fe0f62b672a81260781ee,PodSandboxId:e9f8c63ebeab85911ed14c742621ec82efec6902e573dc16676f8e4082ab5c07,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721241210869764251,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotations:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815de2dec486d9906f6fe63a85a1a5d02a65a60d5e0eb7857d79a62f6d774fe3,PodSandboxId:c704554e3847d95caa225b7cc2144d3bd3736cd0216e1fc568a04c6b9667ecdf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241210077642624,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3563496ca45d287f33046dd2e085c8c769e7441f49b2478272464ce6624cfd9,PodSandboxId:fdcd052ef590ce54260d006ec784a364080bf1d43e6192285e69b3fb59d36bf8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721241192458051681,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e18a23f8599513addef6c2bfc7f909b3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d81bc3c984accc8863f08c5dd41eaeb884cb21afaec241ca9f8f106e49ca4954,PodSandboxId:4f67bfed73cfa2fce6cb36d8a4321f2872b6981de5bb1913a4ebb287f6b6f4b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241177782178274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":91
53,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171da935275e881bc54c6cde276f1768824f6d252865065adf30a82952618b4f,PodSandboxId:9617b3bb8c48cea8a2cc45453fd0391da35dba2a4551bc580cd4c08a5c0c2068,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241177723062771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8405ae,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35112af49fc7ce97d4f130139890ee9f8148cc8736a71efd3773020cbff2c51,PodSandboxId:c704554e3847d95caa225b7cc2144d3bd3736cd0216e1fc568a04c6b9667ecdf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721241177668087221,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,
io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e025856fa899dfbee6b276dde299e1b65214e1e2d733ea40a6d59431b5954074,PodSandboxId:6acf1b0fa81b2f5c2a3e6a4b86384528fe7eba7b42939d345a8cbf01e8b0f2cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241177665973684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes
.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae9c1607affb386fbb47a0752c97c15fb1c66f8d3d004233562d1837b44d8fcf,PodSandboxId:97e958e9cbe30dc85b6498d32e37266e72d4dd032dab0e75dd9293d9dd129709,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721241177505092448,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e0f6fe1021c8b008cc337d128d6aa3bc8d47901d78a8033d64c9e2d253d434,PodSandboxId:94c07421a1bf567b9e9b1f4650f4b35916c572882c20a54c7b9c60a7c3c7010a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721241177553592763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43252ed2b3b541f7b1a8cd399b9098b6c0b973167fde832f33cc5504198cd6fd,PodSandboxId:858a82ac2c20ee06b90789d56921732f764623a5f5880f67c9cfa15a23be55b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241177348749770,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26c2a38c92d350a1610d0d12459f90946d841ddbfa020ed8dab89d6a0190073,PodSandboxId:57a78f36912ceea70cc3c12d8156b382a7db9d300b401d1151aa520820775c06,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241177289847854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46bb59b8c88a5f72356d7eab6e299cb49357832b2f32f9da4d688f440d7708de,PodSandboxId:c6775eb0d598035f8cd74b757ae38e81e954dc7f515089267a841fa0e9cb45be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721240671679911058,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570,PodSandboxId:7904758cf99a7ab28546eb8985ee7b046204d30d1edf39094c972ed389e5fbd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240530705505219,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubern
etes.container.hash: 3e8405ae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1,PodSandboxId:1b4104fef2abaea24a96f4b40a7ae8dfd47c5d0b44c0b88ab5fd54254951ddff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240530698790877,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22,PodSandboxId:ea48366339cf7e3949139c7e70a94f474f735581280c6ec1323d8b6403124191,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721240518897930504,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0,PodSandboxId:9798b06dd09f98ca5f7cd1bfbfde8d398337d482475c16fb27417fc47dc574b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721240514654035048,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943,PodSandboxId:d2f7bf6b169d4d9ca65b56d285cee83b77ebe598e1560374d9f2397db27fe0fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06278
8eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721240493481184747,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annotations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65,PodSandboxId:b5b8e1d746c8d2a45352b8a3ad8ed98ccc12e52438cfffc99ed7b3e0d101f57b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,
State:CONTAINER_EXITED,CreatedAt:1721240493391039760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca80b33c-b803-4cb6-9d95-8974d8d37f61 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.654784486Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=014be27c-39e6-4ae1-8076-72559b2a1613 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.654924382Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=014be27c-39e6-4ae1-8076-72559b2a1613 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.658847221Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b16504bd-6197-4c01-b776-98d632a20abe name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.659606219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721241321659574147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b16504bd-6197-4c01-b776-98d632a20abe name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.664695350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c6f14f46-4abe-4e5f-94d7-a7bec9db7b42 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.664776367Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c6f14f46-4abe-4e5f-94d7-a7bec9db7b42 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.665221426Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a00a7846ea188ac97805c8a904c4e7db5546adbd3c6427366a5e18765f00230,PodSandboxId:fb722c49526acb9b63a3500281d7f12c21959c411b4f5daccf0a4b5c1d2f1f18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241274585957256,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef4c83460a4233c24b932a08223b2c48f01338e960d513729d2cfe392d618067,PodSandboxId:fb722c49526acb9b63a3500281d7f12c21959c411b4f5daccf0a4b5c1d2f1f18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721241218591112768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36072ba2e30683562025920089bc3c181de035cdc0c1e1f74c1ffd635cf5ecbe,PodSandboxId:6acf1b0fa81b2f5c2a3e6a4b86384528fe7eba7b42939d345a8cbf01e8b0f2cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241217580315066,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1c461327466076142ce828af6145a0cd0d44d73409fe0f62b672a81260781ee,PodSandboxId:e9f8c63ebeab85911ed14c742621ec82efec6902e573dc16676f8e4082ab5c07,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721241210869764251,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotations:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815de2dec486d9906f6fe63a85a1a5d02a65a60d5e0eb7857d79a62f6d774fe3,PodSandboxId:c704554e3847d95caa225b7cc2144d3bd3736cd0216e1fc568a04c6b9667ecdf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241210077642624,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3563496ca45d287f33046dd2e085c8c769e7441f49b2478272464ce6624cfd9,PodSandboxId:fdcd052ef590ce54260d006ec784a364080bf1d43e6192285e69b3fb59d36bf8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721241192458051681,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e18a23f8599513addef6c2bfc7f909b3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d81bc3c984accc8863f08c5dd41eaeb884cb21afaec241ca9f8f106e49ca4954,PodSandboxId:4f67bfed73cfa2fce6cb36d8a4321f2872b6981de5bb1913a4ebb287f6b6f4b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241177782178274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":91
53,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171da935275e881bc54c6cde276f1768824f6d252865065adf30a82952618b4f,PodSandboxId:9617b3bb8c48cea8a2cc45453fd0391da35dba2a4551bc580cd4c08a5c0c2068,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241177723062771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8405ae,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35112af49fc7ce97d4f130139890ee9f8148cc8736a71efd3773020cbff2c51,PodSandboxId:c704554e3847d95caa225b7cc2144d3bd3736cd0216e1fc568a04c6b9667ecdf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721241177668087221,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,
io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e025856fa899dfbee6b276dde299e1b65214e1e2d733ea40a6d59431b5954074,PodSandboxId:6acf1b0fa81b2f5c2a3e6a4b86384528fe7eba7b42939d345a8cbf01e8b0f2cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241177665973684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes
.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae9c1607affb386fbb47a0752c97c15fb1c66f8d3d004233562d1837b44d8fcf,PodSandboxId:97e958e9cbe30dc85b6498d32e37266e72d4dd032dab0e75dd9293d9dd129709,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721241177505092448,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e0f6fe1021c8b008cc337d128d6aa3bc8d47901d78a8033d64c9e2d253d434,PodSandboxId:94c07421a1bf567b9e9b1f4650f4b35916c572882c20a54c7b9c60a7c3c7010a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721241177553592763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43252ed2b3b541f7b1a8cd399b9098b6c0b973167fde832f33cc5504198cd6fd,PodSandboxId:858a82ac2c20ee06b90789d56921732f764623a5f5880f67c9cfa15a23be55b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241177348749770,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26c2a38c92d350a1610d0d12459f90946d841ddbfa020ed8dab89d6a0190073,PodSandboxId:57a78f36912ceea70cc3c12d8156b382a7db9d300b401d1151aa520820775c06,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241177289847854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46bb59b8c88a5f72356d7eab6e299cb49357832b2f32f9da4d688f440d7708de,PodSandboxId:c6775eb0d598035f8cd74b757ae38e81e954dc7f515089267a841fa0e9cb45be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721240671679911058,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570,PodSandboxId:7904758cf99a7ab28546eb8985ee7b046204d30d1edf39094c972ed389e5fbd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240530705505219,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubern
etes.container.hash: 3e8405ae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1,PodSandboxId:1b4104fef2abaea24a96f4b40a7ae8dfd47c5d0b44c0b88ab5fd54254951ddff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240530698790877,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22,PodSandboxId:ea48366339cf7e3949139c7e70a94f474f735581280c6ec1323d8b6403124191,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721240518897930504,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0,PodSandboxId:9798b06dd09f98ca5f7cd1bfbfde8d398337d482475c16fb27417fc47dc574b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721240514654035048,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943,PodSandboxId:d2f7bf6b169d4d9ca65b56d285cee83b77ebe598e1560374d9f2397db27fe0fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06278
8eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721240493481184747,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annotations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65,PodSandboxId:b5b8e1d746c8d2a45352b8a3ad8ed98ccc12e52438cfffc99ed7b3e0d101f57b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,
State:CONTAINER_EXITED,CreatedAt:1721240493391039760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c6f14f46-4abe-4e5f-94d7-a7bec9db7b42 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.707797538Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a7806e3-e671-4de0-9c63-1369839de1ba name=/runtime.v1.RuntimeService/Version
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.708101497Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a7806e3-e671-4de0-9c63-1369839de1ba name=/runtime.v1.RuntimeService/Version
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.709341982Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b2be9a0a-570a-4e3c-af10-e160b139c7d7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.709840279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721241321709816595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2be9a0a-570a-4e3c-af10-e160b139c7d7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.710321424Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0452abe7-9c3d-4058-bee1-a41190f0ca81 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.710394325Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0452abe7-9c3d-4058-bee1-a41190f0ca81 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:35:21 ha-445282 crio[3743]: time="2024-07-17 18:35:21.710888701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a00a7846ea188ac97805c8a904c4e7db5546adbd3c6427366a5e18765f00230,PodSandboxId:fb722c49526acb9b63a3500281d7f12c21959c411b4f5daccf0a4b5c1d2f1f18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241274585957256,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef4c83460a4233c24b932a08223b2c48f01338e960d513729d2cfe392d618067,PodSandboxId:fb722c49526acb9b63a3500281d7f12c21959c411b4f5daccf0a4b5c1d2f1f18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721241218591112768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36072ba2e30683562025920089bc3c181de035cdc0c1e1f74c1ffd635cf5ecbe,PodSandboxId:6acf1b0fa81b2f5c2a3e6a4b86384528fe7eba7b42939d345a8cbf01e8b0f2cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241217580315066,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1c461327466076142ce828af6145a0cd0d44d73409fe0f62b672a81260781ee,PodSandboxId:e9f8c63ebeab85911ed14c742621ec82efec6902e573dc16676f8e4082ab5c07,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721241210869764251,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotations:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815de2dec486d9906f6fe63a85a1a5d02a65a60d5e0eb7857d79a62f6d774fe3,PodSandboxId:c704554e3847d95caa225b7cc2144d3bd3736cd0216e1fc568a04c6b9667ecdf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241210077642624,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3563496ca45d287f33046dd2e085c8c769e7441f49b2478272464ce6624cfd9,PodSandboxId:fdcd052ef590ce54260d006ec784a364080bf1d43e6192285e69b3fb59d36bf8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721241192458051681,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e18a23f8599513addef6c2bfc7f909b3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d81bc3c984accc8863f08c5dd41eaeb884cb21afaec241ca9f8f106e49ca4954,PodSandboxId:4f67bfed73cfa2fce6cb36d8a4321f2872b6981de5bb1913a4ebb287f6b6f4b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241177782178274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":91
53,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171da935275e881bc54c6cde276f1768824f6d252865065adf30a82952618b4f,PodSandboxId:9617b3bb8c48cea8a2cc45453fd0391da35dba2a4551bc580cd4c08a5c0c2068,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241177723062771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8405ae,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35112af49fc7ce97d4f130139890ee9f8148cc8736a71efd3773020cbff2c51,PodSandboxId:c704554e3847d95caa225b7cc2144d3bd3736cd0216e1fc568a04c6b9667ecdf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721241177668087221,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,
io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e025856fa899dfbee6b276dde299e1b65214e1e2d733ea40a6d59431b5954074,PodSandboxId:6acf1b0fa81b2f5c2a3e6a4b86384528fe7eba7b42939d345a8cbf01e8b0f2cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241177665973684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes
.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae9c1607affb386fbb47a0752c97c15fb1c66f8d3d004233562d1837b44d8fcf,PodSandboxId:97e958e9cbe30dc85b6498d32e37266e72d4dd032dab0e75dd9293d9dd129709,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721241177505092448,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e0f6fe1021c8b008cc337d128d6aa3bc8d47901d78a8033d64c9e2d253d434,PodSandboxId:94c07421a1bf567b9e9b1f4650f4b35916c572882c20a54c7b9c60a7c3c7010a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721241177553592763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43252ed2b3b541f7b1a8cd399b9098b6c0b973167fde832f33cc5504198cd6fd,PodSandboxId:858a82ac2c20ee06b90789d56921732f764623a5f5880f67c9cfa15a23be55b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241177348749770,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26c2a38c92d350a1610d0d12459f90946d841ddbfa020ed8dab89d6a0190073,PodSandboxId:57a78f36912ceea70cc3c12d8156b382a7db9d300b401d1151aa520820775c06,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241177289847854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46bb59b8c88a5f72356d7eab6e299cb49357832b2f32f9da4d688f440d7708de,PodSandboxId:c6775eb0d598035f8cd74b757ae38e81e954dc7f515089267a841fa0e9cb45be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721240671679911058,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570,PodSandboxId:7904758cf99a7ab28546eb8985ee7b046204d30d1edf39094c972ed389e5fbd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240530705505219,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubern
etes.container.hash: 3e8405ae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1,PodSandboxId:1b4104fef2abaea24a96f4b40a7ae8dfd47c5d0b44c0b88ab5fd54254951ddff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240530698790877,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22,PodSandboxId:ea48366339cf7e3949139c7e70a94f474f735581280c6ec1323d8b6403124191,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721240518897930504,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0,PodSandboxId:9798b06dd09f98ca5f7cd1bfbfde8d398337d482475c16fb27417fc47dc574b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721240514654035048,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943,PodSandboxId:d2f7bf6b169d4d9ca65b56d285cee83b77ebe598e1560374d9f2397db27fe0fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06278
8eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721240493481184747,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annotations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65,PodSandboxId:b5b8e1d746c8d2a45352b8a3ad8ed98ccc12e52438cfffc99ed7b3e0d101f57b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,
State:CONTAINER_EXITED,CreatedAt:1721240493391039760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0452abe7-9c3d-4058-bee1-a41190f0ca81 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1a00a7846ea18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      47 seconds ago       Running             storage-provisioner       4                   fb722c49526ac       storage-provisioner
	ef4c83460a423       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   fb722c49526ac       storage-provisioner
	36072ba2e3068       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      About a minute ago   Running             kube-apiserver            3                   6acf1b0fa81b2       kube-apiserver-ha-445282
	c1c4613274660       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   e9f8c63ebeab8       busybox-fc5497c4f-mcsw8
	815de2dec486d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      About a minute ago   Running             kube-controller-manager   2                   c704554e3847d       kube-controller-manager-ha-445282
	c3563496ca45d       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   fdcd052ef590c       kube-vip-ha-445282
	d81bc3c984acc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   4f67bfed73cfa       coredns-7db6d8ff4d-28njs
	171da935275e8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   9617b3bb8c48c       coredns-7db6d8ff4d-rzxbr
	e35112af49fc7       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      2 minutes ago        Exited              kube-controller-manager   1                   c704554e3847d       kube-controller-manager-ha-445282
	e025856fa899d       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      2 minutes ago        Exited              kube-apiserver            2                   6acf1b0fa81b2       kube-apiserver-ha-445282
	81e0f6fe1021c       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      2 minutes ago        Running             kube-proxy                1                   94c07421a1bf5       kube-proxy-vxmp8
	ae9c1607affb3       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      2 minutes ago        Running             kindnet-cni               1                   97e958e9cbe30       kindnet-75gcw
	43252ed2b3b54       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      2 minutes ago        Running             kube-scheduler            1                   858a82ac2c20e       kube-scheduler-ha-445282
	a26c2a38c92d3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   57a78f36912ce       etcd-ha-445282
	46bb59b8c88a5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   c6775eb0d5980       busybox-fc5497c4f-mcsw8
	408ccf9c4f5cb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   7904758cf99a7       coredns-7db6d8ff4d-rzxbr
	9c8f03436294a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   1b4104fef2aba       coredns-7db6d8ff4d-28njs
	6e8619164a43b       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    13 minutes ago       Exited              kindnet-cni               0                   ea48366339cf7       kindnet-75gcw
	ab95f55f84d8d       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      13 minutes ago       Exited              kube-proxy                0                   9798b06dd09f9       kube-proxy-vxmp8
	09fdf7de5bf8c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   d2f7bf6b169d4       etcd-ha-445282
	585303a41caea       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      13 minutes ago       Exited              kube-scheduler            0                   b5b8e1d746c8d       kube-scheduler-ha-445282
	
	
	==> coredns [171da935275e881bc54c6cde276f1768824f6d252865065adf30a82952618b4f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[529904031]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 18:33:06.720) (total time: 10001ms):
	Trace[529904031]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:33:16.722)
	Trace[529904031]: [10.001905183s] [10.001905183s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:42966->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:42966->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570] <==
	[INFO] 10.244.1.2:42067 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000193888s
	[INFO] 10.244.1.2:38612 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103227s
	[INFO] 10.244.0.4:44523 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001703135s
	[INFO] 10.244.0.4:59477 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107361s
	[INFO] 10.244.0.4:56198 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108839s
	[INFO] 10.244.0.4:38398 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00004501s
	[INFO] 10.244.0.4:41070 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061061s
	[INFO] 10.244.2.2:37193 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00186169s
	[INFO] 10.244.2.2:47175 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001259008s
	[INFO] 10.244.2.2:43118 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117844s
	[INFO] 10.244.2.2:43940 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104875s
	[INFO] 10.244.1.2:43839 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163961s
	[INFO] 10.244.1.2:57262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014754s
	[INFO] 10.244.1.2:59861 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089161s
	[INFO] 10.244.0.4:35507 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101753s
	[INFO] 10.244.0.4:50990 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048865s
	[INFO] 10.244.2.2:35692 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101106s
	[INFO] 10.244.2.2:47438 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106571s
	[INFO] 10.244.0.4:37290 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140704s
	[INFO] 10.244.0.4:37755 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145358s
	[INFO] 10.244.2.2:58729 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097845s
	[INFO] 10.244.2.2:47405 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008526s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1859&timeout=7m43s&timeoutSeconds=463&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1] <==
	[INFO] 10.244.1.2:59627 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.080250702s
	[INFO] 10.244.0.4:51929 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136107s
	[INFO] 10.244.0.4:36818 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096811s
	[INFO] 10.244.0.4:42583 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001301585s
	[INFO] 10.244.2.2:59932 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203977s
	[INFO] 10.244.2.2:50906 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000207365s
	[INFO] 10.244.2.2:41438 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000168363s
	[INFO] 10.244.2.2:47479 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170645s
	[INFO] 10.244.1.2:54595 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000208251s
	[INFO] 10.244.0.4:34251 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081496s
	[INFO] 10.244.0.4:35201 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063768s
	[INFO] 10.244.2.2:50926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154679s
	[INFO] 10.244.2.2:39243 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122767s
	[INFO] 10.244.1.2:50770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014514s
	[INFO] 10.244.1.2:37706 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166071s
	[INFO] 10.244.1.2:53197 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000306441s
	[INFO] 10.244.1.2:34142 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128366s
	[INFO] 10.244.0.4:60617 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102661s
	[INFO] 10.244.0.4:54474 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060033s
	[INFO] 10.244.2.2:50977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014662s
	[INFO] 10.244.2.2:58773 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013261s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1896&timeout=6m38s&timeoutSeconds=398&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1892&timeout=5m22s&timeoutSeconds=322&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [d81bc3c984accc8863f08c5dd41eaeb884cb21afaec241ca9f8f106e49ca4954] <==
	Trace[339431070]: [10.001650188s] [10.001650188s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[733380150]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 18:33:07.161) (total time: 10000ms):
	Trace[733380150]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (18:33:17.162)
	Trace[733380150]: [10.000989737s] [10.000989737s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44808->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44808->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-445282
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-445282
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=ha-445282
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T18_21_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:21:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-445282
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:35:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:33:40 +0000   Wed, 17 Jul 2024 18:21:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:33:40 +0000   Wed, 17 Jul 2024 18:21:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:33:40 +0000   Wed, 17 Jul 2024 18:21:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:33:40 +0000   Wed, 17 Jul 2024 18:22:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    ha-445282
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d1ea799c4fd84c5c8c95385b6a2349f7
	  System UUID:                d1ea799c-4fd8-4c5c-8c95-385b6a2349f7
	  Boot ID:                    58e8f531-06d1-4b66-9fa8-93cd9d417ce6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mcsw8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-28njs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-rzxbr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-445282                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-75gcw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-445282             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-445282    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-vxmp8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-445282             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-445282                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 99s                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-445282 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-445282 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-445282 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-445282 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	  Warning  ContainerGCFailed        2m43s (x2 over 3m43s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           95s                    node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	  Normal   RegisteredNode           91s                    node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	  Normal   RegisteredNode           31s                    node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	
	
	Name:               ha-445282-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-445282-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=ha-445282
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_22_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:22:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-445282-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:35:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:34:22 +0000   Wed, 17 Jul 2024 18:33:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:34:22 +0000   Wed, 17 Jul 2024 18:33:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:34:22 +0000   Wed, 17 Jul 2024 18:33:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:34:22 +0000   Wed, 17 Jul 2024 18:33:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-445282-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5dee104babdb45fe968765f68a06ccd6
	  System UUID:                5dee104b-abdb-45fe-9687-65f68a06ccd6
	  Boot ID:                    b905f24f-7dbb-4fff-9e1c-fcdeea9c5023
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-blwvw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-445282-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-mdqdz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-445282-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-445282-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-xs65r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-445282-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-445282-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 78s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-445282-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-445282-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-445282-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	  Normal  NodeNotReady             9m4s                 node-controller  Node ha-445282-m02 status is now: NodeNotReady
	  Normal  Starting                 2m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node ha-445282-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node ha-445282-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x7 over 2m9s)  kubelet          Node ha-445282-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           95s                  node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	  Normal  RegisteredNode           91s                  node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	  Normal  RegisteredNode           31s                  node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	
	
	Name:               ha-445282-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-445282-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=ha-445282
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_24_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:24:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-445282-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:35:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:34:54 +0000   Wed, 17 Jul 2024 18:24:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:34:54 +0000   Wed, 17 Jul 2024 18:24:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:34:54 +0000   Wed, 17 Jul 2024 18:24:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:34:54 +0000   Wed, 17 Jul 2024 18:24:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    ha-445282-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a37bfc2af28c4be69cd12d6b627c60fb
	  System UUID:                a37bfc2a-f28c-4be6-9cd1-2d6b627c60fb
	  Boot ID:                    dd3a4cdf-6875-4e2a-8a82-0a899322ea66
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xjpp8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-445282-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-x62t5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-445282-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-445282-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-zb54p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-445282-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-445282-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 40s                kube-proxy       
	  Normal   RegisteredNode           11m                node-controller  Node ha-445282-m03 event: Registered Node ha-445282-m03 in Controller
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-445282-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-445282-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-445282-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-445282-m03 event: Registered Node ha-445282-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-445282-m03 event: Registered Node ha-445282-m03 in Controller
	  Normal   RegisteredNode           95s                node-controller  Node ha-445282-m03 event: Registered Node ha-445282-m03 in Controller
	  Normal   RegisteredNode           91s                node-controller  Node ha-445282-m03 event: Registered Node ha-445282-m03 in Controller
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node ha-445282-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node ha-445282-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node ha-445282-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 59s                kubelet          Node ha-445282-m03 has been rebooted, boot id: dd3a4cdf-6875-4e2a-8a82-0a899322ea66
	  Normal   RegisteredNode           31s                node-controller  Node ha-445282-m03 event: Registered Node ha-445282-m03 in Controller
	
	
	Name:               ha-445282-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-445282-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=ha-445282
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_25_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:25:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-445282-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:35:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:35:13 +0000   Wed, 17 Jul 2024 18:35:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:35:13 +0000   Wed, 17 Jul 2024 18:35:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:35:13 +0000   Wed, 17 Jul 2024 18:35:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:35:13 +0000   Wed, 17 Jul 2024 18:35:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    ha-445282-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55cbb1c4afb849b39c587987c52eb826
	  System UUID:                55cbb1c4-afb8-49b3-9c58-7987c52eb826
	  Boot ID:                    df7937d5-dc45-4a61-8a87-51d72c4268f1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-nx7rb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-jstdw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-445282-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-445282-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-445282-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal   NodeReady                9m57s              kubelet          Node ha-445282-m04 status is now: NodeReady
	  Normal   RegisteredNode           95s                node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal   RegisteredNode           91s                node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal   NodeNotReady             55s                node-controller  Node ha-445282-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           31s                node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)    kubelet          Node ha-445282-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)    kubelet          Node ha-445282-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)    kubelet          Node ha-445282-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s (x2 over 9s)    kubelet          Node ha-445282-m04 has been rebooted, boot id: df7937d5-dc45-4a61-8a87-51d72c4268f1
	  Normal   NodeReady                9s (x2 over 9s)    kubelet          Node ha-445282-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +9.891308] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.059987] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056048] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.193800] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.120214] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.274662] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.047178] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.805512] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.055406] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.996103] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.082270] kauditd_printk_skb: 79 callbacks suppressed
	[ +15.197381] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.192890] kauditd_printk_skb: 34 callbacks suppressed
	[Jul17 18:22] kauditd_printk_skb: 24 callbacks suppressed
	[Jul17 18:32] systemd-fstab-generator[3663]: Ignoring "noauto" option for root device
	[  +0.156972] systemd-fstab-generator[3675]: Ignoring "noauto" option for root device
	[  +0.202401] systemd-fstab-generator[3689]: Ignoring "noauto" option for root device
	[  +0.158844] systemd-fstab-generator[3701]: Ignoring "noauto" option for root device
	[  +0.289910] systemd-fstab-generator[3729]: Ignoring "noauto" option for root device
	[  +2.618857] systemd-fstab-generator[3828]: Ignoring "noauto" option for root device
	[  +5.877742] kauditd_printk_skb: 122 callbacks suppressed
	[Jul17 18:33] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.064868] kauditd_printk_skb: 1 callbacks suppressed
	[ +18.240777] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.384053] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943] <==
	2024/07/17 18:31:16 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/17 18:31:16 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/17 18:31:16 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/17 18:31:16 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/17 18:31:16 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T18:31:16.317849Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.147:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T18:31:16.317951Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.147:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T18:31:16.318046Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"c194f0f1585e7a7d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-17T18:31:16.318317Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"47d74de991c9c59d"}
	{"level":"info","ts":"2024-07-17T18:31:16.318356Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"47d74de991c9c59d"}
	{"level":"info","ts":"2024-07-17T18:31:16.318405Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"47d74de991c9c59d"}
	{"level":"info","ts":"2024-07-17T18:31:16.318547Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d"}
	{"level":"info","ts":"2024-07-17T18:31:16.318627Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d"}
	{"level":"info","ts":"2024-07-17T18:31:16.318697Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d"}
	{"level":"info","ts":"2024-07-17T18:31:16.318731Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"47d74de991c9c59d"}
	{"level":"info","ts":"2024-07-17T18:31:16.318758Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:31:16.318798Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:31:16.318844Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:31:16.318903Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:31:16.318951Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:31:16.319032Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:31:16.319067Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:31:16.321712Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.147:2380"}
	{"level":"info","ts":"2024-07-17T18:31:16.321955Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.147:2380"}
	{"level":"info","ts":"2024-07-17T18:31:16.322048Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-445282","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.147:2380"],"advertise-client-urls":["https://192.168.39.147:2379"]}
	
	
	==> etcd [a26c2a38c92d350a1610d0d12459f90946d841ddbfa020ed8dab89d6a0190073] <==
	{"level":"warn","ts":"2024-07-17T18:34:22.12021Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.214:2380/version","remote-member-id":"e193095643f373c5","error":"Get \"https://192.168.39.214:2380/version\": dial tcp 192.168.39.214:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T18:34:22.120376Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e193095643f373c5","error":"Get \"https://192.168.39.214:2380/version\": dial tcp 192.168.39.214:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T18:34:23.234721Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e193095643f373c5","rtt":"0s","error":"dial tcp 192.168.39.214:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T18:34:23.234747Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e193095643f373c5","rtt":"0s","error":"dial tcp 192.168.39.214:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T18:34:26.122632Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.214:2380/version","remote-member-id":"e193095643f373c5","error":"Get \"https://192.168.39.214:2380/version\": dial tcp 192.168.39.214:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T18:34:26.122686Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e193095643f373c5","error":"Get \"https://192.168.39.214:2380/version\": dial tcp 192.168.39.214:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T18:34:28.235364Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e193095643f373c5","rtt":"0s","error":"dial tcp 192.168.39.214:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T18:34:28.235368Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e193095643f373c5","rtt":"0s","error":"dial tcp 192.168.39.214:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T18:34:30.124037Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.214:2380/version","remote-member-id":"e193095643f373c5","error":"Get \"https://192.168.39.214:2380/version\": dial tcp 192.168.39.214:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T18:34:30.1241Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e193095643f373c5","error":"Get \"https://192.168.39.214:2380/version\": dial tcp 192.168.39.214:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T18:34:33.236574Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e193095643f373c5","rtt":"0s","error":"dial tcp 192.168.39.214:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T18:34:33.236624Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e193095643f373c5","rtt":"0s","error":"dial tcp 192.168.39.214:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T18:34:34.125802Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.214:2380/version","remote-member-id":"e193095643f373c5","error":"Get \"https://192.168.39.214:2380/version\": dial tcp 192.168.39.214:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T18:34:34.125931Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e193095643f373c5","error":"Get \"https://192.168.39.214:2380/version\": dial tcp 192.168.39.214:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T18:34:35.384238Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.946813ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14239696759588332699 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:2341 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:372 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T18:34:35.384529Z","caller":"traceutil/trace.go:171","msg":"trace[1091171750] linearizableReadLoop","detail":"{readStateIndex:2735; appliedIndex:2734; }","duration":"105.685767ms","start":"2024-07-17T18:34:35.278815Z","end":"2024-07-17T18:34:35.384501Z","steps":["trace[1091171750] 'read index received'  (duration: 1.044331ms)","trace[1091171750] 'applied index is now lower than readState.Index'  (duration: 104.640278ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T18:34:35.384624Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.798656ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T18:34:35.384665Z","caller":"traceutil/trace.go:171","msg":"trace[1956142068] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2347; }","duration":"105.866973ms","start":"2024-07-17T18:34:35.278788Z","end":"2024-07-17T18:34:35.384655Z","steps":["trace[1956142068] 'agreement among raft nodes before linearized reading'  (duration: 105.804466ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:34:36.095399Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:34:36.095716Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:34:36.096263Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:34:36.124208Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c194f0f1585e7a7d","to":"e193095643f373c5","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-17T18:34:36.124367Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:34:36.137094Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c194f0f1585e7a7d","to":"e193095643f373c5","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-17T18:34:36.13723Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	
	
	==> kernel <==
	 18:35:22 up 14 min,  0 users,  load average: 0.19, 0.31, 0.23
	Linux ha-445282 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22] <==
	I0717 18:30:50.000035       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:30:50.000209       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:30:50.000493       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:30:50.000538       1 main.go:303] handling current node
	I0717 18:30:50.000589       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:30:50.000607       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:30:50.000695       1 main.go:299] Handling node with IPs: map[192.168.39.214:{}]
	I0717 18:30:50.000714       1 main.go:326] Node ha-445282-m03 has CIDR [10.244.2.0/24] 
	I0717 18:30:59.993772       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:30:59.993805       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:30:59.994002       1 main.go:299] Handling node with IPs: map[192.168.39.214:{}]
	I0717 18:30:59.994031       1 main.go:326] Node ha-445282-m03 has CIDR [10.244.2.0/24] 
	I0717 18:30:59.994083       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:30:59.994105       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:30:59.994170       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:30:59.994192       1 main.go:303] handling current node
	E0717 18:31:09.573998       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1872&timeout=8m13s&timeoutSeconds=493&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	I0717 18:31:09.994558       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:31:09.994616       1 main.go:303] handling current node
	I0717 18:31:09.994831       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:31:09.994848       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:31:09.995079       1 main.go:299] Handling node with IPs: map[192.168.39.214:{}]
	I0717 18:31:09.995116       1 main.go:326] Node ha-445282-m03 has CIDR [10.244.2.0/24] 
	I0717 18:31:09.995233       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:31:09.995261       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ae9c1607affb386fbb47a0752c97c15fb1c66f8d3d004233562d1837b44d8fcf] <==
	I0717 18:34:48.710790       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:34:58.710713       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:34:58.710860       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:34:58.711088       1 main.go:299] Handling node with IPs: map[192.168.39.214:{}]
	I0717 18:34:58.711121       1 main.go:326] Node ha-445282-m03 has CIDR [10.244.2.0/24] 
	I0717 18:34:58.711231       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:34:58.711254       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:34:58.711503       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:34:58.711534       1 main.go:303] handling current node
	I0717 18:35:08.717586       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:35:08.717650       1 main.go:303] handling current node
	I0717 18:35:08.717691       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:35:08.717701       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:35:08.717954       1 main.go:299] Handling node with IPs: map[192.168.39.214:{}]
	I0717 18:35:08.717988       1 main.go:326] Node ha-445282-m03 has CIDR [10.244.2.0/24] 
	I0717 18:35:08.718085       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:35:08.718113       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:35:18.711332       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:35:18.711487       1 main.go:303] handling current node
	I0717 18:35:18.711503       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:35:18.711537       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:35:18.711687       1 main.go:299] Handling node with IPs: map[192.168.39.214:{}]
	I0717 18:35:18.711710       1 main.go:326] Node ha-445282-m03 has CIDR [10.244.2.0/24] 
	I0717 18:35:18.711766       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:35:18.711788       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [36072ba2e30683562025920089bc3c181de035cdc0c1e1f74c1ffd635cf5ecbe] <==
	I0717 18:33:39.462464       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0717 18:33:39.462476       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0717 18:33:39.550595       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 18:33:39.551773       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 18:33:39.552399       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 18:33:39.552520       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 18:33:39.552550       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 18:33:39.553343       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 18:33:39.558249       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 18:33:39.562458       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 18:33:39.562550       1 aggregator.go:165] initial CRD sync complete...
	I0717 18:33:39.562567       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 18:33:39.562572       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 18:33:39.562577       1 cache.go:39] Caches are synced for autoregister controller
	W0717 18:33:39.580665       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.198 192.168.39.214]
	I0717 18:33:39.581614       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 18:33:39.592787       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 18:33:39.592919       1 policy_source.go:224] refreshing policies
	I0717 18:33:39.633972       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 18:33:39.683397       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 18:33:39.691180       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0717 18:33:39.695323       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0717 18:33:40.454267       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0717 18:33:41.026298       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.147 192.168.39.198 192.168.39.214]
	W0717 18:33:51.020659       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.147 192.168.39.198]
	
	
	==> kube-apiserver [e025856fa899dfbee6b276dde299e1b65214e1e2d733ea40a6d59431b5954074] <==
	I0717 18:32:58.382283       1 options.go:221] external host was not specified, using 192.168.39.147
	I0717 18:32:58.384712       1 server.go:148] Version: v1.30.2
	I0717 18:32:58.385755       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:32:58.898303       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0717 18:32:58.902775       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 18:32:58.906127       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0717 18:32:58.906271       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0717 18:32:58.906744       1 instance.go:299] Using reconciler: lease
	W0717 18:33:18.897607       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0717 18:33:18.897719       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0717 18:33:18.908094       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0717 18:33:18.908129       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [815de2dec486d9906f6fe63a85a1a5d02a65a60d5e0eb7857d79a62f6d774fe3] <==
	I0717 18:33:51.767523       1 shared_informer.go:320] Caches are synced for taint
	I0717 18:33:51.767647       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0717 18:33:51.767773       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-445282-m03"
	I0717 18:33:51.767821       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-445282-m04"
	I0717 18:33:51.767861       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-445282"
	I0717 18:33:51.767893       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-445282-m02"
	I0717 18:33:51.768020       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0717 18:33:52.174839       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 18:33:52.225512       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 18:33:52.225557       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 18:34:00.503708       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-vv46p EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-vv46p\": the object has been modified; please apply your changes to the latest version and try again"
	I0717 18:34:00.504335       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c1dfbc02-f6dd-489c-a7fd-15bb44a9c3cd", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-vv46p EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-vv46p": the object has been modified; please apply your changes to the latest version and try again
	I0717 18:34:00.532943       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-vv46p EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-vv46p\": the object has been modified; please apply your changes to the latest version and try again"
	I0717 18:34:00.533685       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"c1dfbc02-f6dd-489c-a7fd-15bb44a9c3cd", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-vv46p EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-vv46p": the object has been modified; please apply your changes to the latest version and try again
	I0717 18:34:00.565125       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="98.43914ms"
	I0717 18:34:00.607321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.132237ms"
	I0717 18:34:00.607778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="108.839µs"
	I0717 18:34:00.856183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.689µs"
	I0717 18:34:07.251198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.992952ms"
	I0717 18:34:07.251382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.531µs"
	I0717 18:34:24.888058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.024412ms"
	I0717 18:34:24.888345       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.397µs"
	I0717 18:34:44.422038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.076203ms"
	I0717 18:34:44.422333       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.934µs"
	I0717 18:35:13.487544       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-445282-m04"
	
	
	==> kube-controller-manager [e35112af49fc7ce97d4f130139890ee9f8148cc8736a71efd3773020cbff2c51] <==
	I0717 18:32:58.869210       1 serving.go:380] Generated self-signed cert in-memory
	I0717 18:32:59.159486       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0717 18:32:59.159583       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:32:59.161160       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0717 18:32:59.161568       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 18:32:59.161582       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 18:32:59.161597       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0717 18:33:19.914169       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.147:8443/healthz\": dial tcp 192.168.39.147:8443: connect: connection refused"
	
	
	==> kube-proxy [81e0f6fe1021c8b008cc337d128d6aa3bc8d47901d78a8033d64c9e2d253d434] <==
	I0717 18:32:59.235401       1 server_linux.go:69] "Using iptables proxy"
	E0717 18:33:00.038232       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-445282\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 18:33:03.111707       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-445282\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 18:33:06.182392       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-445282\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 18:33:12.327854       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-445282\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 18:33:24.613915       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-445282\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0717 18:33:42.807993       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.147"]
	I0717 18:33:42.845715       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:33:42.845796       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:33:42.845822       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:33:42.848852       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:33:42.849122       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:33:42.849149       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:33:42.850714       1 config.go:192] "Starting service config controller"
	I0717 18:33:42.850770       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:33:42.850798       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:33:42.850818       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:33:42.851675       1 config.go:319] "Starting node config controller"
	I0717 18:33:42.851723       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:33:42.950976       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 18:33:42.951037       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:33:42.952508       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0] <==
	E0717 18:30:01.862614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:04.933928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:04.934074       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:04.934278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:04.934353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:04.934536       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-445282&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:04.934607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-445282&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:11.079483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:11.079600       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:11.079515       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-445282&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:11.079774       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-445282&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:14.163064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:14.163657       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:20.295355       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:20.295451       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:23.366093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-445282&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:23.366150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-445282&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:29.511120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:29.511198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:38.726673       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-445282&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:38.726789       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-445282&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:41.798821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:41.798869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:47.942170       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:47.942247       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [43252ed2b3b541f7b1a8cd399b9098b6c0b973167fde832f33cc5504198cd6fd] <==
	W0717 18:33:29.792601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.147:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	E0717 18:33:29.792653       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.147:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	W0717 18:33:30.399986       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.147:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	E0717 18:33:30.400234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.147:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	W0717 18:33:33.737704       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.147:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	E0717 18:33:33.737825       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.147:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	W0717 18:33:35.099939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.147:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	E0717 18:33:35.100010       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.147:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	W0717 18:33:35.783175       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.147:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	E0717 18:33:35.783335       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.147:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	W0717 18:33:36.315631       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.147:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	E0717 18:33:36.315816       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.147:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	W0717 18:33:36.540228       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.147:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	E0717 18:33:36.540307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.147:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	W0717 18:33:37.199145       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.147:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	E0717 18:33:37.199272       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.147:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	W0717 18:33:39.481357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:33:39.481539       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:33:39.481747       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:33:39.481848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 18:33:39.481983       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:33:39.482075       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 18:33:39.482198       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:33:39.482315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0717 18:33:40.723687       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65] <==
	W0717 18:31:12.565002       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:31:12.565168       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:31:12.608786       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:31:12.608884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 18:31:12.763667       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:31:12.763708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 18:31:12.894632       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:31:12.894685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 18:31:12.896005       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:31:12.896118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 18:31:13.089839       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:31:13.089893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:31:13.222876       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:31:13.222924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:31:13.260627       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 18:31:13.260660       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 18:31:13.279645       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:31:13.279675       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 18:31:13.678514       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:31:13.678654       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 18:31:14.025047       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:31:14.025142       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:31:16.201762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:31:16.201817       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:31:16.236579       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 17 18:33:39 ha-445282 kubelet[1382]: I0717 18:33:39.661273    1382 scope.go:117] "RemoveContainer" containerID="5a94a87a35e84533ba262a0519c0e6c3520cb95c10257b1549084c0e27ce453c"
	Jul 17 18:33:39 ha-445282 kubelet[1382]: E0717 18:33:39.974005    1382 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-445282\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-445282?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 17 18:33:39 ha-445282 kubelet[1382]: E0717 18:33:39.974021    1382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-445282?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 17 18:33:39 ha-445282 kubelet[1382]: I0717 18:33:39.974104    1382 status_manager.go:853] "Failed to get status for pod" podUID="058431b563c109d1ce3751345314cdc4" pod="kube-system/kube-apiserver-ha-445282" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-445282\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 17 18:33:39 ha-445282 kubelet[1382]: W0717 18:33:39.974261    1382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 17 18:33:39 ha-445282 kubelet[1382]: E0717 18:33:39.974335    1382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1858": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 17 18:33:42 ha-445282 kubelet[1382]: I0717 18:33:42.268062    1382 scope.go:117] "RemoveContainer" containerID="5584b455114ce8b979c86b70d63c7cbee8da2eabf6659f6ae26c736fa92507d4"
	Jul 17 18:33:42 ha-445282 kubelet[1382]: I0717 18:33:42.269945    1382 scope.go:117] "RemoveContainer" containerID="ef4c83460a4233c24b932a08223b2c48f01338e960d513729d2cfe392d618067"
	Jul 17 18:33:42 ha-445282 kubelet[1382]: E0717 18:33:42.276018    1382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ae931c3b-8935-481d-bef4-0b05dad8c915)\"" pod="kube-system/storage-provisioner" podUID="ae931c3b-8935-481d-bef4-0b05dad8c915"
	Jul 17 18:33:57 ha-445282 kubelet[1382]: I0717 18:33:57.568353    1382 scope.go:117] "RemoveContainer" containerID="ef4c83460a4233c24b932a08223b2c48f01338e960d513729d2cfe392d618067"
	Jul 17 18:33:57 ha-445282 kubelet[1382]: E0717 18:33:57.568705    1382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ae931c3b-8935-481d-bef4-0b05dad8c915)\"" pod="kube-system/storage-provisioner" podUID="ae931c3b-8935-481d-bef4-0b05dad8c915"
	Jul 17 18:33:59 ha-445282 kubelet[1382]: I0717 18:33:59.310530    1382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-mcsw8" podStartSLOduration=568.621432765 podStartE2EDuration="9m31.310501469s" podCreationTimestamp="2024-07-17 18:24:28 +0000 UTC" firstStartedPulling="2024-07-17 18:24:28.975903626 +0000 UTC m=+169.528827671" lastFinishedPulling="2024-07-17 18:24:31.664972326 +0000 UTC m=+172.217896375" observedRunningTime="2024-07-17 18:24:32.364934932 +0000 UTC m=+172.917858996" watchObservedRunningTime="2024-07-17 18:33:59.310501469 +0000 UTC m=+739.863425532"
	Jul 17 18:34:09 ha-445282 kubelet[1382]: I0717 18:34:09.568956    1382 scope.go:117] "RemoveContainer" containerID="ef4c83460a4233c24b932a08223b2c48f01338e960d513729d2cfe392d618067"
	Jul 17 18:34:09 ha-445282 kubelet[1382]: E0717 18:34:09.570090    1382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ae931c3b-8935-481d-bef4-0b05dad8c915)\"" pod="kube-system/storage-provisioner" podUID="ae931c3b-8935-481d-bef4-0b05dad8c915"
	Jul 17 18:34:20 ha-445282 kubelet[1382]: I0717 18:34:20.568773    1382 scope.go:117] "RemoveContainer" containerID="ef4c83460a4233c24b932a08223b2c48f01338e960d513729d2cfe392d618067"
	Jul 17 18:34:20 ha-445282 kubelet[1382]: E0717 18:34:20.568997    1382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ae931c3b-8935-481d-bef4-0b05dad8c915)\"" pod="kube-system/storage-provisioner" podUID="ae931c3b-8935-481d-bef4-0b05dad8c915"
	Jul 17 18:34:32 ha-445282 kubelet[1382]: I0717 18:34:32.569216    1382 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-445282" podUID="ca5bcedd-e43a-4711-bdfc-dc1c2c524d86"
	Jul 17 18:34:32 ha-445282 kubelet[1382]: I0717 18:34:32.589509    1382 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-445282"
	Jul 17 18:34:34 ha-445282 kubelet[1382]: I0717 18:34:34.568792    1382 scope.go:117] "RemoveContainer" containerID="ef4c83460a4233c24b932a08223b2c48f01338e960d513729d2cfe392d618067"
	Jul 17 18:34:35 ha-445282 kubelet[1382]: I0717 18:34:35.573941    1382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-445282" podStartSLOduration=3.573915595 podStartE2EDuration="3.573915595s" podCreationTimestamp="2024-07-17 18:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:34:35.573682727 +0000 UTC m=+776.126606792" watchObservedRunningTime="2024-07-17 18:34:35.573915595 +0000 UTC m=+776.126839641"
	Jul 17 18:34:39 ha-445282 kubelet[1382]: E0717 18:34:39.587875    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:34:39 ha-445282 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:34:39 ha-445282 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:34:39 ha-445282 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:34:39 ha-445282 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:35:21.250290  419306 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19282-392903/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
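The "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.ErrTooLong: by default a Scanner refuses any single line longer than 64 KiB, which is why the oversized lastStart.txt could not be re-read for the post-mortem. A minimal, hypothetical sketch (not minikube's actual logs.go code) of reading such a file with an enlarged scanner buffer:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    )

    func main() {
    	// Hypothetical path; stands in for the lastStart.txt mentioned above.
    	f, err := os.Open("lastStart.txt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	defer f.Close()

    	sc := bufio.NewScanner(f)
    	// The default cap is bufio.MaxScanTokenSize (64 KiB); one very long line
    	// makes Scan() stop with bufio.ErrTooLong ("token too long").
    	// Buffer() raises that cap, here to 10 MiB.
    	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
    	for sc.Scan() {
    		fmt.Println(sc.Text())
    	}
    	if err := sc.Err(); err != nil {
    		fmt.Fprintln(os.Stderr, "scan error:", err)
    	}
    }
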
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-445282 -n ha-445282
helpers_test.go:261: (dbg) Run:  kubectl --context ha-445282 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 stop -v=7 --alsologtostderr
E0717 18:37:13.091505  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-445282 stop -v=7 --alsologtostderr: exit status 82 (2m0.472546918s)

                                                
                                                
-- stdout --
	* Stopping node "ha-445282-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:35:41.010785  419717 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:35:41.010896  419717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:35:41.010908  419717 out.go:304] Setting ErrFile to fd 2...
	I0717 18:35:41.010915  419717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:35:41.011167  419717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:35:41.011414  419717 out.go:298] Setting JSON to false
	I0717 18:35:41.011493  419717 mustload.go:65] Loading cluster: ha-445282
	I0717 18:35:41.011857  419717 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:35:41.011952  419717 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:35:41.012164  419717 mustload.go:65] Loading cluster: ha-445282
	I0717 18:35:41.012312  419717 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:35:41.012344  419717 stop.go:39] StopHost: ha-445282-m04
	I0717 18:35:41.012765  419717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:35:41.012813  419717 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:35:41.029195  419717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45017
	I0717 18:35:41.029747  419717 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:35:41.030340  419717 main.go:141] libmachine: Using API Version  1
	I0717 18:35:41.030360  419717 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:35:41.030754  419717 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:35:41.034684  419717 out.go:177] * Stopping node "ha-445282-m04"  ...
	I0717 18:35:41.036264  419717 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 18:35:41.036314  419717 main.go:141] libmachine: (ha-445282-m04) Calling .DriverName
	I0717 18:35:41.036614  419717 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 18:35:41.036643  419717 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHHostname
	I0717 18:35:41.039760  419717 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:35:41.040243  419717 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:35:07 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:35:41.040283  419717 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:35:41.040451  419717 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHPort
	I0717 18:35:41.040668  419717 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHKeyPath
	I0717 18:35:41.040816  419717 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHUsername
	I0717 18:35:41.041004  419717 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m04/id_rsa Username:docker}
	I0717 18:35:41.123436  419717 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 18:35:41.177896  419717 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 18:35:41.230969  419717 main.go:141] libmachine: Stopping "ha-445282-m04"...
	I0717 18:35:41.231037  419717 main.go:141] libmachine: (ha-445282-m04) Calling .GetState
	I0717 18:35:41.232686  419717 main.go:141] libmachine: (ha-445282-m04) Calling .Stop
	I0717 18:35:41.236480  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 0/120
	I0717 18:35:42.237970  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 1/120
	I0717 18:35:43.239112  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 2/120
	I0717 18:35:44.240408  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 3/120
	I0717 18:35:45.241831  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 4/120
	I0717 18:35:46.243866  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 5/120
	I0717 18:35:47.245485  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 6/120
	I0717 18:35:48.246884  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 7/120
	I0717 18:35:49.248376  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 8/120
	I0717 18:35:50.249667  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 9/120
	I0717 18:35:51.252082  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 10/120
	I0717 18:35:52.253586  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 11/120
	I0717 18:35:53.255659  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 12/120
	I0717 18:35:54.256982  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 13/120
	I0717 18:35:55.258331  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 14/120
	I0717 18:35:56.260585  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 15/120
	I0717 18:35:57.261948  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 16/120
	I0717 18:35:58.263584  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 17/120
	I0717 18:35:59.264927  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 18/120
	I0717 18:36:00.267017  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 19/120
	I0717 18:36:01.269284  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 20/120
	I0717 18:36:02.271200  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 21/120
	I0717 18:36:03.273370  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 22/120
	I0717 18:36:04.274893  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 23/120
	I0717 18:36:05.276450  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 24/120
	I0717 18:36:06.278386  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 25/120
	I0717 18:36:07.279827  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 26/120
	I0717 18:36:08.281684  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 27/120
	I0717 18:36:09.283018  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 28/120
	I0717 18:36:10.284586  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 29/120
	I0717 18:36:11.286646  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 30/120
	I0717 18:36:12.288213  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 31/120
	I0717 18:36:13.289633  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 32/120
	I0717 18:36:14.291013  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 33/120
	I0717 18:36:15.292530  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 34/120
	I0717 18:36:16.294315  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 35/120
	I0717 18:36:17.295814  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 36/120
	I0717 18:36:18.297159  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 37/120
	I0717 18:36:19.298594  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 38/120
	I0717 18:36:20.299842  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 39/120
	I0717 18:36:21.301720  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 40/120
	I0717 18:36:22.303194  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 41/120
	I0717 18:36:23.304587  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 42/120
	I0717 18:36:24.305944  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 43/120
	I0717 18:36:25.307167  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 44/120
	I0717 18:36:26.309269  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 45/120
	I0717 18:36:27.310791  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 46/120
	I0717 18:36:28.312163  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 47/120
	I0717 18:36:29.313733  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 48/120
	I0717 18:36:30.315175  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 49/120
	I0717 18:36:31.317426  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 50/120
	I0717 18:36:32.319013  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 51/120
	I0717 18:36:33.320363  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 52/120
	I0717 18:36:34.321874  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 53/120
	I0717 18:36:35.323977  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 54/120
	I0717 18:36:36.325995  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 55/120
	I0717 18:36:37.327099  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 56/120
	I0717 18:36:38.328449  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 57/120
	I0717 18:36:39.329821  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 58/120
	I0717 18:36:40.331109  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 59/120
	I0717 18:36:41.333226  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 60/120
	I0717 18:36:42.334558  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 61/120
	I0717 18:36:43.336045  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 62/120
	I0717 18:36:44.337816  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 63/120
	I0717 18:36:45.339438  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 64/120
	I0717 18:36:46.341516  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 65/120
	I0717 18:36:47.342805  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 66/120
	I0717 18:36:48.344228  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 67/120
	I0717 18:36:49.345593  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 68/120
	I0717 18:36:50.346944  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 69/120
	I0717 18:36:51.349025  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 70/120
	I0717 18:36:52.351052  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 71/120
	I0717 18:36:53.352616  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 72/120
	I0717 18:36:54.353986  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 73/120
	I0717 18:36:55.355365  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 74/120
	I0717 18:36:56.357169  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 75/120
	I0717 18:36:57.359397  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 76/120
	I0717 18:36:58.361592  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 77/120
	I0717 18:36:59.362980  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 78/120
	I0717 18:37:00.364754  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 79/120
	I0717 18:37:01.366895  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 80/120
	I0717 18:37:02.368387  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 81/120
	I0717 18:37:03.369694  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 82/120
	I0717 18:37:04.371075  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 83/120
	I0717 18:37:05.372607  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 84/120
	I0717 18:37:06.374514  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 85/120
	I0717 18:37:07.375966  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 86/120
	I0717 18:37:08.377865  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 87/120
	I0717 18:37:09.379527  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 88/120
	I0717 18:37:10.380848  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 89/120
	I0717 18:37:11.382459  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 90/120
	I0717 18:37:12.384005  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 91/120
	I0717 18:37:13.385505  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 92/120
	I0717 18:37:14.387037  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 93/120
	I0717 18:37:15.388293  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 94/120
	I0717 18:37:16.390390  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 95/120
	I0717 18:37:17.391812  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 96/120
	I0717 18:37:18.393197  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 97/120
	I0717 18:37:19.394439  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 98/120
	I0717 18:37:20.395860  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 99/120
	I0717 18:37:21.398081  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 100/120
	I0717 18:37:22.399290  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 101/120
	I0717 18:37:23.401350  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 102/120
	I0717 18:37:24.402539  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 103/120
	I0717 18:37:25.403864  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 104/120
	I0717 18:37:26.405327  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 105/120
	I0717 18:37:27.406513  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 106/120
	I0717 18:37:28.408041  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 107/120
	I0717 18:37:29.409281  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 108/120
	I0717 18:37:30.410895  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 109/120
	I0717 18:37:31.412988  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 110/120
	I0717 18:37:32.414455  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 111/120
	I0717 18:37:33.416378  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 112/120
	I0717 18:37:34.417892  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 113/120
	I0717 18:37:35.419198  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 114/120
	I0717 18:37:36.421214  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 115/120
	I0717 18:37:37.422830  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 116/120
	I0717 18:37:38.424187  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 117/120
	I0717 18:37:39.425489  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 118/120
	I0717 18:37:40.426767  419717 main.go:141] libmachine: (ha-445282-m04) Waiting for machine to stop 119/120
	I0717 18:37:41.427613  419717 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 18:37:41.427696  419717 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 18:37:41.429937  419717 out.go:177] 
	W0717 18:37:41.431483  419717 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 18:37:41.431501  419717 out.go:239] * 
	* 
	W0717 18:37:41.434505  419717 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 18:37:41.435842  419717 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-445282 stop -v=7 --alsologtostderr": exit status 82
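The stop failed because the ha-445282-m04 guest never left the "Running" state: the kvm2 driver polled it 120 times at roughly one-second intervals, gave up with GUEST_STOP_TIMEOUT, and the stop command exited with code 82. A minimal sketch for inspecting and, if necessary, force-stopping the stuck guest out-of-band is shown below; it assumes the libvirt domain carries the node name (ha-445282-m04), which is the kvm2 driver's usual naming but is not confirmed by this log:

	# re-run the stop that timed out, with verbose driver logging
	out/minikube-linux-amd64 -p ha-445282 stop -v=7 --alsologtostderr
	# query libvirt directly for the guest's state (domain name assumed to match the node name)
	sudo virsh domstate ha-445282-m04
	# last resort: hard power-off the guest so the environment can be cleaned up
	sudo virsh destroy ha-445282-m04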
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr: exit status 3 (19.003382737s)

                                                
                                                
-- stdout --
	ha-445282
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-445282-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:37:41.487428  420147 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:37:41.487540  420147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:37:41.487547  420147 out.go:304] Setting ErrFile to fd 2...
	I0717 18:37:41.487551  420147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:37:41.487724  420147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:37:41.487906  420147 out.go:298] Setting JSON to false
	I0717 18:37:41.487943  420147 mustload.go:65] Loading cluster: ha-445282
	I0717 18:37:41.488051  420147 notify.go:220] Checking for updates...
	I0717 18:37:41.488388  420147 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:37:41.488408  420147 status.go:255] checking status of ha-445282 ...
	I0717 18:37:41.488932  420147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:41.488995  420147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:41.511952  420147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38859
	I0717 18:37:41.512655  420147 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:41.513282  420147 main.go:141] libmachine: Using API Version  1
	I0717 18:37:41.513303  420147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:41.513728  420147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:41.513948  420147 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:37:41.515672  420147 status.go:330] ha-445282 host status = "Running" (err=<nil>)
	I0717 18:37:41.515693  420147 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:37:41.516010  420147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:41.516090  420147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:41.532514  420147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38391
	I0717 18:37:41.533030  420147 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:41.533563  420147 main.go:141] libmachine: Using API Version  1
	I0717 18:37:41.533587  420147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:41.533964  420147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:41.534148  420147 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:37:41.536709  420147 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:37:41.537271  420147 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:37:41.537298  420147 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:37:41.537434  420147 host.go:66] Checking if "ha-445282" exists ...
	I0717 18:37:41.537728  420147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:41.537763  420147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:41.552853  420147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45263
	I0717 18:37:41.553215  420147 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:41.553736  420147 main.go:141] libmachine: Using API Version  1
	I0717 18:37:41.553758  420147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:41.554064  420147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:41.554248  420147 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:37:41.554422  420147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:37:41.554470  420147 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:37:41.556803  420147 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:37:41.557224  420147 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:37:41.557247  420147 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:37:41.557415  420147 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:37:41.557598  420147 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:37:41.557738  420147 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:37:41.557900  420147 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:37:41.645830  420147 ssh_runner.go:195] Run: systemctl --version
	I0717 18:37:41.653198  420147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:37:41.671824  420147 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:37:41.671852  420147 api_server.go:166] Checking apiserver status ...
	I0717 18:37:41.671888  420147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:37:41.690693  420147 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4974/cgroup
	W0717 18:37:41.700450  420147 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4974/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:37:41.700516  420147 ssh_runner.go:195] Run: ls
	I0717 18:37:41.705318  420147 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:37:41.709797  420147 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:37:41.709820  420147 status.go:422] ha-445282 apiserver status = Running (err=<nil>)
	I0717 18:37:41.709832  420147 status.go:257] ha-445282 status: &{Name:ha-445282 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:37:41.709857  420147 status.go:255] checking status of ha-445282-m02 ...
	I0717 18:37:41.710143  420147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:41.710184  420147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:41.725118  420147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40349
	I0717 18:37:41.725544  420147 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:41.726004  420147 main.go:141] libmachine: Using API Version  1
	I0717 18:37:41.726023  420147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:41.726280  420147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:41.726472  420147 main.go:141] libmachine: (ha-445282-m02) Calling .GetState
	I0717 18:37:41.728040  420147 status.go:330] ha-445282-m02 host status = "Running" (err=<nil>)
	I0717 18:37:41.728059  420147 host.go:66] Checking if "ha-445282-m02" exists ...
	I0717 18:37:41.728329  420147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:41.728360  420147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:41.742668  420147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33775
	I0717 18:37:41.743062  420147 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:41.743503  420147 main.go:141] libmachine: Using API Version  1
	I0717 18:37:41.743526  420147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:41.743850  420147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:41.744074  420147 main.go:141] libmachine: (ha-445282-m02) Calling .GetIP
	I0717 18:37:41.746686  420147 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:37:41.747174  420147 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:33:02 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:37:41.747202  420147 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:37:41.747332  420147 host.go:66] Checking if "ha-445282-m02" exists ...
	I0717 18:37:41.747729  420147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:41.747773  420147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:41.762173  420147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39075
	I0717 18:37:41.762545  420147 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:41.762987  420147 main.go:141] libmachine: Using API Version  1
	I0717 18:37:41.763009  420147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:41.763302  420147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:41.763448  420147 main.go:141] libmachine: (ha-445282-m02) Calling .DriverName
	I0717 18:37:41.763614  420147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:37:41.763643  420147 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHHostname
	I0717 18:37:41.766328  420147 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:37:41.766786  420147 main.go:141] libmachine: (ha-445282-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:a9:c1", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:33:02 +0000 UTC Type:0 Mac:52:54:00:a6:a9:c1 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-445282-m02 Clientid:01:52:54:00:a6:a9:c1}
	I0717 18:37:41.766813  420147 main.go:141] libmachine: (ha-445282-m02) DBG | domain ha-445282-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a6:a9:c1 in network mk-ha-445282
	I0717 18:37:41.767065  420147 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHPort
	I0717 18:37:41.767279  420147 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHKeyPath
	I0717 18:37:41.767464  420147 main.go:141] libmachine: (ha-445282-m02) Calling .GetSSHUsername
	I0717 18:37:41.767653  420147 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m02/id_rsa Username:docker}
	I0717 18:37:41.850005  420147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:37:41.870459  420147 kubeconfig.go:125] found "ha-445282" server: "https://192.168.39.254:8443"
	I0717 18:37:41.870488  420147 api_server.go:166] Checking apiserver status ...
	I0717 18:37:41.870518  420147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:37:41.887872  420147 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1419/cgroup
	W0717 18:37:41.899292  420147 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1419/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:37:41.899345  420147 ssh_runner.go:195] Run: ls
	I0717 18:37:41.904120  420147 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 18:37:41.908636  420147 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 18:37:41.908655  420147 status.go:422] ha-445282-m02 apiserver status = Running (err=<nil>)
	I0717 18:37:41.908663  420147 status.go:257] ha-445282-m02 status: &{Name:ha-445282-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:37:41.908677  420147 status.go:255] checking status of ha-445282-m04 ...
	I0717 18:37:41.908964  420147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:41.909014  420147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:41.924259  420147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33383
	I0717 18:37:41.924704  420147 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:41.925157  420147 main.go:141] libmachine: Using API Version  1
	I0717 18:37:41.925177  420147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:41.925478  420147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:41.925715  420147 main.go:141] libmachine: (ha-445282-m04) Calling .GetState
	I0717 18:37:41.927216  420147 status.go:330] ha-445282-m04 host status = "Running" (err=<nil>)
	I0717 18:37:41.927235  420147 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:37:41.927515  420147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:41.927564  420147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:41.943088  420147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37061
	I0717 18:37:41.943464  420147 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:41.943930  420147 main.go:141] libmachine: Using API Version  1
	I0717 18:37:41.943950  420147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:41.944282  420147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:41.944554  420147 main.go:141] libmachine: (ha-445282-m04) Calling .GetIP
	I0717 18:37:41.946922  420147 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:37:41.947320  420147 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:35:07 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:37:41.947348  420147 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:37:41.947606  420147 host.go:66] Checking if "ha-445282-m04" exists ...
	I0717 18:37:41.947941  420147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:41.947984  420147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:41.962737  420147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39729
	I0717 18:37:41.963101  420147 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:41.963575  420147 main.go:141] libmachine: Using API Version  1
	I0717 18:37:41.963593  420147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:41.963918  420147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:41.964129  420147 main.go:141] libmachine: (ha-445282-m04) Calling .DriverName
	I0717 18:37:41.964326  420147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:37:41.964348  420147 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHHostname
	I0717 18:37:41.967284  420147 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:37:41.967810  420147 main.go:141] libmachine: (ha-445282-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:c4", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:35:07 +0000 UTC Type:0 Mac:52:54:00:a1:60:c4 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-445282-m04 Clientid:01:52:54:00:a1:60:c4}
	I0717 18:37:41.967852  420147 main.go:141] libmachine: (ha-445282-m04) DBG | domain ha-445282-m04 has defined IP address 192.168.39.41 and MAC address 52:54:00:a1:60:c4 in network mk-ha-445282
	I0717 18:37:41.968002  420147 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHPort
	I0717 18:37:41.968145  420147 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHKeyPath
	I0717 18:37:41.968329  420147 main.go:141] libmachine: (ha-445282-m04) Calling .GetSSHUsername
	I0717 18:37:41.968458  420147 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282-m04/id_rsa Username:docker}
	W0717 18:38:00.440766  420147 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.41:22: connect: no route to host
	W0717 18:38:00.440889  420147 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.41:22: connect: no route to host
	E0717 18:38:00.440905  420147 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.41:22: connect: no route to host
	I0717 18:38:00.440915  420147 status.go:257] ha-445282-m04 status: &{Name:ha-445282-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0717 18:38:00.440960  420147 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.41:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr" : exit status 3
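The follow-up status probe could not open an SSH session to m04 (dial tcp 192.168.39.41:22: connect: no route to host), so that node is reported as host: Error / kubelet: Nonexistent and the command exits with code 3, which the harness treats as a failure. A quick manual check along the same lines, assuming the same profile name and node IP, might look like:

	# per-node status; exits non-zero when any node is unhealthy or unreachable
	out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr
	# probe the worker's SSH port directly; this should reproduce the "no route to host" failure
	nc -vz -w 5 192.168.39.41 22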
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-445282 -n ha-445282
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-445282 logs -n 25: (1.797854146s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-445282 ssh -n ha-445282-m02 sudo cat                                          | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m03_ha-445282-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m03:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04:/home/docker/cp-test_ha-445282-m03_ha-445282-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282-m04 sudo cat                                          | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m03_ha-445282-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-445282 cp testdata/cp-test.txt                                                | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3528186093/001/cp-test_ha-445282-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282:/home/docker/cp-test_ha-445282-m04_ha-445282.txt                       |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282 sudo cat                                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m04_ha-445282.txt                                 |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m02:/home/docker/cp-test_ha-445282-m04_ha-445282-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282-m02 sudo cat                                          | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m04_ha-445282-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m03:/home/docker/cp-test_ha-445282-m04_ha-445282-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n                                                                 | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | ha-445282-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-445282 ssh -n ha-445282-m03 sudo cat                                          | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC | 17 Jul 24 18:25 UTC |
	|         | /home/docker/cp-test_ha-445282-m04_ha-445282-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-445282 node stop m02 -v=7                                                     | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-445282 node start m02 -v=7                                                    | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-445282 -v=7                                                           | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-445282 -v=7                                                                | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:29 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-445282 --wait=true -v=7                                                    | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:35 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-445282                                                                | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	| node    | ha-445282 node delete m03 -v=7                                                   | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:35 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-445282 stop -v=7                                                              | ha-445282 | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:31:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:31:15.349819  417974 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:31:15.350332  417974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:31:15.350350  417974 out.go:304] Setting ErrFile to fd 2...
	I0717 18:31:15.350359  417974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:31:15.350837  417974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:31:15.351820  417974 out.go:298] Setting JSON to false
	I0717 18:31:15.352878  417974 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8018,"bootTime":1721233057,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:31:15.352947  417974 start.go:139] virtualization: kvm guest
	I0717 18:31:15.355062  417974 out.go:177] * [ha-445282] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:31:15.356651  417974 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 18:31:15.356714  417974 notify.go:220] Checking for updates...
	I0717 18:31:15.358908  417974 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:31:15.360239  417974 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:31:15.361497  417974 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:31:15.362814  417974 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:31:15.364037  417974 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:31:15.365918  417974 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:31:15.366056  417974 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:31:15.366681  417974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:31:15.366764  417974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:31:15.383167  417974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I0717 18:31:15.383634  417974 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:31:15.384248  417974 main.go:141] libmachine: Using API Version  1
	I0717 18:31:15.384276  417974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:31:15.384773  417974 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:31:15.385042  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:31:15.422215  417974 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 18:31:15.423532  417974 start.go:297] selected driver: kvm2
	I0717 18:31:15.423549  417974 start.go:901] validating driver "kvm2" against &{Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.41 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:31:15.423677  417974 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:31:15.424014  417974 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:31:15.424093  417974 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:31:15.439117  417974 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:31:15.439864  417974 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:31:15.439950  417974 cni.go:84] Creating CNI manager for ""
	I0717 18:31:15.439967  417974 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 18:31:15.440042  417974 start.go:340] cluster config:
	{Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.41 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:31:15.440192  417974 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:31:15.442171  417974 out.go:177] * Starting "ha-445282" primary control-plane node in "ha-445282" cluster
	I0717 18:31:15.443413  417974 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:31:15.443455  417974 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:31:15.443466  417974 cache.go:56] Caching tarball of preloaded images
	I0717 18:31:15.443591  417974 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:31:15.443605  417974 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:31:15.443740  417974 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/config.json ...
	I0717 18:31:15.443955  417974 start.go:360] acquireMachinesLock for ha-445282: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:31:15.444006  417974 start.go:364] duration metric: took 29.625µs to acquireMachinesLock for "ha-445282"
	I0717 18:31:15.444026  417974 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:31:15.444036  417974 fix.go:54] fixHost starting: 
	I0717 18:31:15.444298  417974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:31:15.444339  417974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:31:15.459024  417974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34075
	I0717 18:31:15.459458  417974 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:31:15.459939  417974 main.go:141] libmachine: Using API Version  1
	I0717 18:31:15.459960  417974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:31:15.460258  417974 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:31:15.460461  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:31:15.460645  417974 main.go:141] libmachine: (ha-445282) Calling .GetState
	I0717 18:31:15.462563  417974 fix.go:112] recreateIfNeeded on ha-445282: state=Running err=<nil>
	W0717 18:31:15.462582  417974 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:31:15.464709  417974 out.go:177] * Updating the running kvm2 "ha-445282" VM ...
	I0717 18:31:15.465969  417974 machine.go:94] provisionDockerMachine start ...
	I0717 18:31:15.465997  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:31:15.466218  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:31:15.468868  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.469341  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:31:15.469366  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.469548  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:31:15.469743  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:15.469922  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:15.470042  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:31:15.470201  417974 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:15.470408  417974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:31:15.470421  417974 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:31:15.578669  417974 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-445282
	
	I0717 18:31:15.578705  417974 main.go:141] libmachine: (ha-445282) Calling .GetMachineName
	I0717 18:31:15.579006  417974 buildroot.go:166] provisioning hostname "ha-445282"
	I0717 18:31:15.579062  417974 main.go:141] libmachine: (ha-445282) Calling .GetMachineName
	I0717 18:31:15.579311  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:31:15.582401  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.582857  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:31:15.582887  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.583000  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:31:15.583223  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:15.583375  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:15.583497  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:31:15.583636  417974 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:15.583811  417974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:31:15.583822  417974 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-445282 && echo "ha-445282" | sudo tee /etc/hostname
	I0717 18:31:15.708579  417974 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-445282
	
	I0717 18:31:15.708611  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:31:15.711369  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.711829  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:31:15.711862  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.712089  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:31:15.712349  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:15.712568  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:15.712755  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:31:15.712953  417974 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:15.713251  417974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:31:15.713282  417974 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-445282' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-445282/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-445282' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:31:15.821917  417974 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:31:15.821956  417974 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 18:31:15.822020  417974 buildroot.go:174] setting up certificates
	I0717 18:31:15.822039  417974 provision.go:84] configureAuth start
	I0717 18:31:15.822050  417974 main.go:141] libmachine: (ha-445282) Calling .GetMachineName
	I0717 18:31:15.822359  417974 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:31:15.825046  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.825498  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:31:15.825526  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.825675  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:31:15.827929  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.828376  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:31:15.828398  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.828631  417974 provision.go:143] copyHostCerts
	I0717 18:31:15.828685  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:31:15.828725  417974 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 18:31:15.828740  417974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:31:15.828811  417974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 18:31:15.828917  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:31:15.828934  417974 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 18:31:15.828941  417974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:31:15.828979  417974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 18:31:15.829044  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:31:15.829061  417974 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 18:31:15.829069  417974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:31:15.829109  417974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 18:31:15.829159  417974 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.ha-445282 san=[127.0.0.1 192.168.39.147 ha-445282 localhost minikube]
	I0717 18:31:15.952017  417974 provision.go:177] copyRemoteCerts
	I0717 18:31:15.952079  417974 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:31:15.952108  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:31:15.955042  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.955386  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:31:15.955412  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:15.955565  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:31:15.955777  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:15.955985  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:31:15.956249  417974 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:31:16.039403  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 18:31:16.039488  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 18:31:16.068546  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 18:31:16.068646  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0717 18:31:16.097350  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 18:31:16.097440  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:31:16.122084  417974 provision.go:87] duration metric: took 300.02862ms to configureAuth
	I0717 18:31:16.122119  417974 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:31:16.122560  417974 config.go:182] Loaded profile config "ha-445282": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:31:16.122677  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:31:16.125191  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:16.125605  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:31:16.125636  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:31:16.125790  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:31:16.126006  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:16.126207  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:31:16.126369  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:31:16.126544  417974 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:16.126777  417974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:31:16.126797  417974 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:32:46.960029  417974 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:32:46.960066  417974 machine.go:97] duration metric: took 1m31.494073461s to provisionDockerMachine
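The ~91 s recorded for provisionDockerMachine above is spent almost entirely in the SSH command that writes /etc/sysconfig/crio.minikube and restarts crio (issued at 18:31:16, returned at 18:32:46). As an illustrative sketch, not output from this run, the step can be verified by hand from a shell inside the VM (e.g. `minikube -p ha-445282 ssh`), assuming the crio unit sources that drop-in file:

    $ cat /etc/sysconfig/crio.minikube              # should contain CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    $ systemctl cat crio | grep -i EnvironmentFile  # assumption: the unit references /etc/sysconfig/crio.minikube
    $ systemctl is-active crio                      # expect "active" once the restart has completed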
	I0717 18:32:46.960097  417974 start.go:293] postStartSetup for "ha-445282" (driver="kvm2")
	I0717 18:32:46.960111  417974 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:32:46.960135  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:32:46.960535  417974 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:32:46.960578  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:32:46.964198  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:46.964869  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:32:46.964893  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:46.965072  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:32:46.965274  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:32:46.965459  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:32:46.965594  417974 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:32:47.048203  417974 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:32:47.052734  417974 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:32:47.052763  417974 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 18:32:47.052840  417974 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 18:32:47.052931  417974 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 18:32:47.052944  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /etc/ssl/certs/4001712.pem
	I0717 18:32:47.053054  417974 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:32:47.062755  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:32:47.088671  417974 start.go:296] duration metric: took 128.55067ms for postStartSetup
	I0717 18:32:47.088728  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:32:47.089052  417974 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 18:32:47.089102  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:32:47.091568  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.091929  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:32:47.091952  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.092146  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:32:47.092383  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:32:47.092579  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:32:47.092732  417974 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	W0717 18:32:47.176415  417974 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0717 18:32:47.176449  417974 fix.go:56] duration metric: took 1m31.732414182s for fixHost
	I0717 18:32:47.176472  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:32:47.179208  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.179518  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:32:47.179549  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.179769  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:32:47.179995  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:32:47.180195  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:32:47.180398  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:32:47.180553  417974 main.go:141] libmachine: Using SSH client type: native
	I0717 18:32:47.180763  417974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0717 18:32:47.180777  417974 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:32:47.289473  417974 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241167.251273522
	
	I0717 18:32:47.289507  417974 fix.go:216] guest clock: 1721241167.251273522
	I0717 18:32:47.289515  417974 fix.go:229] Guest: 2024-07-17 18:32:47.251273522 +0000 UTC Remote: 2024-07-17 18:32:47.176455495 +0000 UTC m=+91.865165448 (delta=74.818027ms)
	I0717 18:32:47.289545  417974 fix.go:200] guest clock delta is within tolerance: 74.818027ms
	I0717 18:32:47.289554  417974 start.go:83] releasing machines lock for "ha-445282", held for 1m31.845536108s
	I0717 18:32:47.289676  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:32:47.289974  417974 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:32:47.292370  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.292779  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:32:47.292810  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.292968  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:32:47.293498  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:32:47.293708  417974 main.go:141] libmachine: (ha-445282) Calling .DriverName
	I0717 18:32:47.293822  417974 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:32:47.293866  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:32:47.293955  417974 ssh_runner.go:195] Run: cat /version.json
	I0717 18:32:47.294010  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHHostname
	I0717 18:32:47.297101  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.297513  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:32:47.297549  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.297658  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.297680  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:32:47.297870  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:32:47.298013  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:32:47.298088  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:32:47.298146  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:47.298160  417974 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:32:47.298258  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHPort
	I0717 18:32:47.298427  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHKeyPath
	I0717 18:32:47.298586  417974 main.go:141] libmachine: (ha-445282) Calling .GetSSHUsername
	I0717 18:32:47.298739  417974 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/ha-445282/id_rsa Username:docker}
	I0717 18:32:47.398465  417974 ssh_runner.go:195] Run: systemctl --version
	I0717 18:32:47.404972  417974 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:32:47.566918  417974 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:32:47.575381  417974 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:32:47.575460  417974 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:32:47.585666  417974 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 18:32:47.585692  417974 start.go:495] detecting cgroup driver to use...
	I0717 18:32:47.585752  417974 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:32:47.602578  417974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:32:47.616547  417974 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:32:47.616603  417974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:32:47.630572  417974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:32:47.645635  417974 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:32:47.808451  417974 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:32:47.963011  417974 docker.go:233] disabling docker service ...
	I0717 18:32:47.963094  417974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:32:47.983633  417974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:32:48.000804  417974 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:32:48.174007  417974 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:32:48.320071  417974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:32:48.336089  417974 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:32:48.355154  417974 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:32:48.355215  417974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:32:48.366769  417974 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:32:48.366835  417974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:32:48.378210  417974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:32:48.388824  417974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:32:48.399726  417974 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:32:48.410860  417974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:32:48.421790  417974 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:32:48.432509  417974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:32:48.443176  417974 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:32:48.452986  417974 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:32:48.462988  417974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:32:48.614270  417974 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:32:50.711056  417974 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.096733835s)
	I0717 18:32:50.711104  417974 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:32:50.711175  417974 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:32:50.716284  417974 start.go:563] Will wait 60s for crictl version
	I0717 18:32:50.716349  417974 ssh_runner.go:195] Run: which crictl
	I0717 18:32:50.720257  417974 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:32:50.764053  417974 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:32:50.764160  417974 ssh_runner.go:195] Run: crio --version
	I0717 18:32:50.793316  417974 ssh_runner.go:195] Run: crio --version
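The sed edits at 18:32:48 rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs with conmon in the "pod" cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. An illustrative sketch (not output from this run) of checking the result on the node after the crio restart:

    $ grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    $ grep -A2 'default_sysctls' /etc/crio/crio.conf.d/02-crio.conf   # should list "net.ipv4.ip_unprivileged_port_start=0"
    $ crio config | grep -E 'pause_image|cgroup_manager'              # effective values as CRI-O itself renders them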
	I0717 18:32:50.823476  417974 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:32:50.824801  417974 main.go:141] libmachine: (ha-445282) Calling .GetIP
	I0717 18:32:50.827602  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:50.828036  417974 main.go:141] libmachine: (ha-445282) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:00:89", ip: ""} in network mk-ha-445282: {Iface:virbr1 ExpiryTime:2024-07-17 19:21:11 +0000 UTC Type:0 Mac:52:54:00:1e:00:89 Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-445282 Clientid:01:52:54:00:1e:00:89}
	I0717 18:32:50.828063  417974 main.go:141] libmachine: (ha-445282) DBG | domain ha-445282 has defined IP address 192.168.39.147 and MAC address 52:54:00:1e:00:89 in network mk-ha-445282
	I0717 18:32:50.828222  417974 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:32:50.833105  417974 kubeadm.go:883] updating cluster {Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.41 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:32:50.833292  417974 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:32:50.833351  417974 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:32:50.882161  417974 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:32:50.882187  417974 crio.go:433] Images already preloaded, skipping extraction
	I0717 18:32:50.882246  417974 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:32:50.918801  417974 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:32:50.918832  417974 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:32:50.918843  417974 kubeadm.go:934] updating node { 192.168.39.147 8443 v1.30.2 crio true true} ...
	I0717 18:32:50.918962  417974 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-445282 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:32:50.919040  417974 ssh_runner.go:195] Run: crio config
	I0717 18:32:50.971008  417974 cni.go:84] Creating CNI manager for ""
	I0717 18:32:50.971032  417974 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 18:32:50.971051  417974 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:32:50.971075  417974 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.147 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-445282 NodeName:ha-445282 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:32:50.971246  417974 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-445282"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
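The kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one document) is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a hedged sketch: apiVersion kubeadm.k8s.io/v1beta3 is the config version this kubeadm v1.30.2 build expects, and upstream v1.30 ships a `kubeadm config validate` subcommand, so once the file is on the node it could be sanity-checked in place:

    $ sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new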
	
	I0717 18:32:50.971276  417974 kube-vip.go:115] generating kube-vip config ...
	I0717 18:32:50.971327  417974 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 18:32:50.984148  417974 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 18:32:50.984281  417974 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
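The static-pod manifest above makes kube-vip bind the control-plane VIP 192.168.39.254 to eth0 on whichever control-plane node currently holds the plndr-cp-lock Lease, and load-balance port 8443 across the API servers. A small sketch for confirming both once the cluster is back up (illustrative; the kubeconfig context is assumed to match the profile name ha-445282):

    $ kubectl --context ha-445282 -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'
    $ minikube -p ha-445282 ssh -- ip -4 addr show dev eth0 | grep 192.168.39.254   # VIP is bound only on the current leader, which may be m02/m03 instead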
	I0717 18:32:50.984360  417974 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:32:50.994594  417974 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:32:50.994674  417974 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 18:32:51.004637  417974 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0717 18:32:51.021125  417974 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:32:51.037466  417974 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0717 18:32:51.054162  417974 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 18:32:51.073237  417974 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 18:32:51.077095  417974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:32:51.228071  417974 ssh_runner.go:195] Run: sudo systemctl start kubelet
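The three "scp memory" steps above install the kubelet drop-in (10-kubeadm.conf, carrying the --node-ip/--hostname-override ExecStart shown earlier), the kubelet.service unit, and the kube-vip static-pod manifest before kubelet is started. An illustrative check, assuming a shell on the node:

    $ systemctl cat kubelet | grep -A3 '10-kubeadm.conf'    # drop-in picked up after the daemon-reload
    $ systemctl is-active kubelet
    $ ls /etc/kubernetes/manifests/kube-vip.yaml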
	I0717 18:32:51.243845  417974 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282 for IP: 192.168.39.147
	I0717 18:32:51.243870  417974 certs.go:194] generating shared ca certs ...
	I0717 18:32:51.243887  417974 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:51.244047  417974 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 18:32:51.244090  417974 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 18:32:51.244099  417974 certs.go:256] generating profile certs ...
	I0717 18:32:51.244181  417974 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/client.key
	I0717 18:32:51.244209  417974 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.6292725e
	I0717 18:32:51.244224  417974 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.6292725e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.147 192.168.39.198 192.168.39.214 192.168.39.254]
	I0717 18:32:51.360280  417974 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.6292725e ...
	I0717 18:32:51.360320  417974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.6292725e: {Name:mkf49a6ec11aa829e1269ba54cc0595eb1191166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:51.360515  417974 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.6292725e ...
	I0717 18:32:51.360531  417974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.6292725e: {Name:mk415e9bf668acc349201fe00a8a04c4a6d6499d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:51.360618  417974 certs.go:381] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt.6292725e -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt
	I0717 18:32:51.360778  417974 certs.go:385] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key.6292725e -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key
	I0717 18:32:51.360916  417974 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key
	I0717 18:32:51.360931  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 18:32:51.360944  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 18:32:51.360954  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 18:32:51.360966  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 18:32:51.360975  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 18:32:51.360986  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 18:32:51.360995  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 18:32:51.361007  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 18:32:51.361065  417974 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 18:32:51.361095  417974 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 18:32:51.361102  417974 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 18:32:51.361122  417974 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 18:32:51.361144  417974 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:32:51.361163  417974 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 18:32:51.361199  417974 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:32:51.361221  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /usr/share/ca-certificates/4001712.pem
	I0717 18:32:51.361244  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:32:51.361260  417974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem -> /usr/share/ca-certificates/400171.pem
	I0717 18:32:51.361856  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:32:51.388667  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:32:51.413024  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:32:51.437128  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 18:32:51.461105  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 18:32:51.485062  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:32:51.507823  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:32:51.530300  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/ha-445282/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:32:51.553251  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 18:32:51.575882  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:32:51.600808  417974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 18:32:51.624081  417974 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:32:51.640316  417974 ssh_runner.go:195] Run: openssl version
	I0717 18:32:51.647532  417974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 18:32:51.658873  417974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 18:32:51.663281  417974 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 18:32:51.663357  417974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 18:32:51.669147  417974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:32:51.679216  417974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:32:51.690863  417974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:32:51.695546  417974 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:32:51.695613  417974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:32:51.701213  417974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:32:51.710722  417974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 18:32:51.722754  417974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 18:32:51.727230  417974 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 18:32:51.727287  417974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 18:32:51.732805  417974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
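The `openssl x509 -hash` calls above determine the symlink names used in the ln -fs commands: 4001712.pem maps to 3ec20f2e.0, minikubeCA.pem to b5213941.0, and 400171.pem to 51391683.0, so that OpenSSL-based clients can find each CA in /etc/ssl/certs by subject-hash lookup. A short sketch making the pairing explicit (illustrative, not from this run):

    $ for c in 4001712 minikubeCA 400171; do printf '%s.pem -> ' "$c"; openssl x509 -hash -noout -in /usr/share/ca-certificates/"$c".pem; done
    $ ls -l /etc/ssl/certs/3ec20f2e.0 /etc/ssl/certs/b5213941.0 /etc/ssl/certs/51391683.0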
	I0717 18:32:51.742316  417974 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:32:51.746852  417974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:32:51.752601  417974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:32:51.757953  417974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:32:51.763277  417974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:32:51.768648  417974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:32:51.774146  417974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
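Each `-checkend 86400` run above asks openssl whether the certificate will still be valid 24 hours from now (a non-zero exit presumably prompts regeneration). A hedged sketch for inspecting the same files' actual expiry dates and the API-server SANs, using standard openssl x509 flags (-ext requires OpenSSL 1.1.1 or newer):

    $ sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt
    $ sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/etcd/server.crt
    $ sudo openssl x509 -noout -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt   # should include the SANs generated above (192.168.39.147/.198/.214/.254)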
	I0717 18:32:51.779895  417974 kubeadm.go:392] StartCluster: {Name:ha-445282 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-445282 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.147 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.214 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.41 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:32:51.780024  417974 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:32:51.780074  417974 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:32:51.819398  417974 cri.go:89] found id: "386a254963e27b5bc5449d987aac00c9a82c99a9a2e22bec541039092c57f295"
	I0717 18:32:51.819427  417974 cri.go:89] found id: "f2993e8e42bef1aa7bfbe816a678cba116cffcb0e26b47eaa3660a52f5aa2914"
	I0717 18:32:51.819435  417974 cri.go:89] found id: "5a94a87a35e84533ba262a0519c0e6c3520cb95c10257b1549084c0e27ce453c"
	I0717 18:32:51.819439  417974 cri.go:89] found id: "54ce94edc90340e3fecdf7e9c373bf97b043857f76676c04f062a075824d8435"
	I0717 18:32:51.819443  417974 cri.go:89] found id: "408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570"
	I0717 18:32:51.819448  417974 cri.go:89] found id: "9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1"
	I0717 18:32:51.819452  417974 cri.go:89] found id: "6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22"
	I0717 18:32:51.819456  417974 cri.go:89] found id: "ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0"
	I0717 18:32:51.819460  417974 cri.go:89] found id: "ac29ebebce0938fd21e40b0afaed55120b3a90091496f7e0bb354f366e3983d1"
	I0717 18:32:51.819470  417974 cri.go:89] found id: "09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943"
	I0717 18:32:51.819474  417974 cri.go:89] found id: "608260c5da2653858a3ba5ed68d5d0fd133359fe2d82577c89dd208d1fd4061a"
	I0717 18:32:51.819478  417974 cri.go:89] found id: "f910525936daaedaf4fb3cce81ed7e6f3f6fb3c9cf2aa2ba7e26987a717c5b8b"
	I0717 18:32:51.819481  417974 cri.go:89] found id: "585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65"
	I0717 18:32:51.819485  417974 cri.go:89] found id: ""
	I0717 18:32:51.819541  417974 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.083285926Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721241481083264314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f9a289f-623d-4d58-a76b-e0622251fe44 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.083997446Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5e90212-e60f-41b2-9146-2616eedbbc88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.084151734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5e90212-e60f-41b2-9146-2616eedbbc88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.084601273Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a00a7846ea188ac97805c8a904c4e7db5546adbd3c6427366a5e18765f00230,PodSandboxId:fb722c49526acb9b63a3500281d7f12c21959c411b4f5daccf0a4b5c1d2f1f18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241274585957256,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef4c83460a4233c24b932a08223b2c48f01338e960d513729d2cfe392d618067,PodSandboxId:fb722c49526acb9b63a3500281d7f12c21959c411b4f5daccf0a4b5c1d2f1f18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721241218591112768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36072ba2e30683562025920089bc3c181de035cdc0c1e1f74c1ffd635cf5ecbe,PodSandboxId:6acf1b0fa81b2f5c2a3e6a4b86384528fe7eba7b42939d345a8cbf01e8b0f2cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241217580315066,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1c461327466076142ce828af6145a0cd0d44d73409fe0f62b672a81260781ee,PodSandboxId:e9f8c63ebeab85911ed14c742621ec82efec6902e573dc16676f8e4082ab5c07,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721241210869764251,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotations:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815de2dec486d9906f6fe63a85a1a5d02a65a60d5e0eb7857d79a62f6d774fe3,PodSandboxId:c704554e3847d95caa225b7cc2144d3bd3736cd0216e1fc568a04c6b9667ecdf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241210077642624,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3563496ca45d287f33046dd2e085c8c769e7441f49b2478272464ce6624cfd9,PodSandboxId:fdcd052ef590ce54260d006ec784a364080bf1d43e6192285e69b3fb59d36bf8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721241192458051681,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e18a23f8599513addef6c2bfc7f909b3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d81bc3c984accc8863f08c5dd41eaeb884cb21afaec241ca9f8f106e49ca4954,PodSandboxId:4f67bfed73cfa2fce6cb36d8a4321f2872b6981de5bb1913a4ebb287f6b6f4b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241177782178274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":91
53,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171da935275e881bc54c6cde276f1768824f6d252865065adf30a82952618b4f,PodSandboxId:9617b3bb8c48cea8a2cc45453fd0391da35dba2a4551bc580cd4c08a5c0c2068,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241177723062771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8405ae,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35112af49fc7ce97d4f130139890ee9f8148cc8736a71efd3773020cbff2c51,PodSandboxId:c704554e3847d95caa225b7cc2144d3bd3736cd0216e1fc568a04c6b9667ecdf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721241177668087221,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,
io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e025856fa899dfbee6b276dde299e1b65214e1e2d733ea40a6d59431b5954074,PodSandboxId:6acf1b0fa81b2f5c2a3e6a4b86384528fe7eba7b42939d345a8cbf01e8b0f2cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241177665973684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes
.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae9c1607affb386fbb47a0752c97c15fb1c66f8d3d004233562d1837b44d8fcf,PodSandboxId:97e958e9cbe30dc85b6498d32e37266e72d4dd032dab0e75dd9293d9dd129709,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721241177505092448,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e0f6fe1021c8b008cc337d128d6aa3bc8d47901d78a8033d64c9e2d253d434,PodSandboxId:94c07421a1bf567b9e9b1f4650f4b35916c572882c20a54c7b9c60a7c3c7010a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721241177553592763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43252ed2b3b541f7b1a8cd399b9098b6c0b973167fde832f33cc5504198cd6fd,PodSandboxId:858a82ac2c20ee06b90789d56921732f764623a5f5880f67c9cfa15a23be55b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241177348749770,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26c2a38c92d350a1610d0d12459f90946d841ddbfa020ed8dab89d6a0190073,PodSandboxId:57a78f36912ceea70cc3c12d8156b382a7db9d300b401d1151aa520820775c06,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241177289847854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46bb59b8c88a5f72356d7eab6e299cb49357832b2f32f9da4d688f440d7708de,PodSandboxId:c6775eb0d598035f8cd74b757ae38e81e954dc7f515089267a841fa0e9cb45be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721240671679911058,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570,PodSandboxId:7904758cf99a7ab28546eb8985ee7b046204d30d1edf39094c972ed389e5fbd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240530705505219,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubern
etes.container.hash: 3e8405ae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1,PodSandboxId:1b4104fef2abaea24a96f4b40a7ae8dfd47c5d0b44c0b88ab5fd54254951ddff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240530698790877,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22,PodSandboxId:ea48366339cf7e3949139c7e70a94f474f735581280c6ec1323d8b6403124191,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721240518897930504,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0,PodSandboxId:9798b06dd09f98ca5f7cd1bfbfde8d398337d482475c16fb27417fc47dc574b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721240514654035048,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943,PodSandboxId:d2f7bf6b169d4d9ca65b56d285cee83b77ebe598e1560374d9f2397db27fe0fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06278
8eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721240493481184747,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annotations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65,PodSandboxId:b5b8e1d746c8d2a45352b8a3ad8ed98ccc12e52438cfffc99ed7b3e0d101f57b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,
State:CONTAINER_EXITED,CreatedAt:1721240493391039760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5e90212-e60f-41b2-9146-2616eedbbc88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.127901160Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=af652d4e-516f-4330-a115-a374a663ae20 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.127994243Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=af652d4e-516f-4330-a115-a374a663ae20 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.129128142Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c32dab2-7774-4775-afd0-8a0474c7b797 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.129714216Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721241481129686535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c32dab2-7774-4775-afd0-8a0474c7b797 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.130223030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b7789ad-c2e8-409b-9dd2-91eefec9f12b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.130307543Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b7789ad-c2e8-409b-9dd2-91eefec9f12b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.130774213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a00a7846ea188ac97805c8a904c4e7db5546adbd3c6427366a5e18765f00230,PodSandboxId:fb722c49526acb9b63a3500281d7f12c21959c411b4f5daccf0a4b5c1d2f1f18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241274585957256,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef4c83460a4233c24b932a08223b2c48f01338e960d513729d2cfe392d618067,PodSandboxId:fb722c49526acb9b63a3500281d7f12c21959c411b4f5daccf0a4b5c1d2f1f18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721241218591112768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36072ba2e30683562025920089bc3c181de035cdc0c1e1f74c1ffd635cf5ecbe,PodSandboxId:6acf1b0fa81b2f5c2a3e6a4b86384528fe7eba7b42939d345a8cbf01e8b0f2cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241217580315066,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1c461327466076142ce828af6145a0cd0d44d73409fe0f62b672a81260781ee,PodSandboxId:e9f8c63ebeab85911ed14c742621ec82efec6902e573dc16676f8e4082ab5c07,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721241210869764251,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotations:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815de2dec486d9906f6fe63a85a1a5d02a65a60d5e0eb7857d79a62f6d774fe3,PodSandboxId:c704554e3847d95caa225b7cc2144d3bd3736cd0216e1fc568a04c6b9667ecdf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241210077642624,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3563496ca45d287f33046dd2e085c8c769e7441f49b2478272464ce6624cfd9,PodSandboxId:fdcd052ef590ce54260d006ec784a364080bf1d43e6192285e69b3fb59d36bf8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721241192458051681,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e18a23f8599513addef6c2bfc7f909b3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d81bc3c984accc8863f08c5dd41eaeb884cb21afaec241ca9f8f106e49ca4954,PodSandboxId:4f67bfed73cfa2fce6cb36d8a4321f2872b6981de5bb1913a4ebb287f6b6f4b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241177782178274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":91
53,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171da935275e881bc54c6cde276f1768824f6d252865065adf30a82952618b4f,PodSandboxId:9617b3bb8c48cea8a2cc45453fd0391da35dba2a4551bc580cd4c08a5c0c2068,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241177723062771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8405ae,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35112af49fc7ce97d4f130139890ee9f8148cc8736a71efd3773020cbff2c51,PodSandboxId:c704554e3847d95caa225b7cc2144d3bd3736cd0216e1fc568a04c6b9667ecdf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721241177668087221,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,
io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e025856fa899dfbee6b276dde299e1b65214e1e2d733ea40a6d59431b5954074,PodSandboxId:6acf1b0fa81b2f5c2a3e6a4b86384528fe7eba7b42939d345a8cbf01e8b0f2cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241177665973684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes
.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae9c1607affb386fbb47a0752c97c15fb1c66f8d3d004233562d1837b44d8fcf,PodSandboxId:97e958e9cbe30dc85b6498d32e37266e72d4dd032dab0e75dd9293d9dd129709,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721241177505092448,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e0f6fe1021c8b008cc337d128d6aa3bc8d47901d78a8033d64c9e2d253d434,PodSandboxId:94c07421a1bf567b9e9b1f4650f4b35916c572882c20a54c7b9c60a7c3c7010a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721241177553592763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43252ed2b3b541f7b1a8cd399b9098b6c0b973167fde832f33cc5504198cd6fd,PodSandboxId:858a82ac2c20ee06b90789d56921732f764623a5f5880f67c9cfa15a23be55b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241177348749770,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26c2a38c92d350a1610d0d12459f90946d841ddbfa020ed8dab89d6a0190073,PodSandboxId:57a78f36912ceea70cc3c12d8156b382a7db9d300b401d1151aa520820775c06,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241177289847854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46bb59b8c88a5f72356d7eab6e299cb49357832b2f32f9da4d688f440d7708de,PodSandboxId:c6775eb0d598035f8cd74b757ae38e81e954dc7f515089267a841fa0e9cb45be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721240671679911058,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570,PodSandboxId:7904758cf99a7ab28546eb8985ee7b046204d30d1edf39094c972ed389e5fbd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240530705505219,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubern
etes.container.hash: 3e8405ae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1,PodSandboxId:1b4104fef2abaea24a96f4b40a7ae8dfd47c5d0b44c0b88ab5fd54254951ddff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240530698790877,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22,PodSandboxId:ea48366339cf7e3949139c7e70a94f474f735581280c6ec1323d8b6403124191,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721240518897930504,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0,PodSandboxId:9798b06dd09f98ca5f7cd1bfbfde8d398337d482475c16fb27417fc47dc574b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721240514654035048,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943,PodSandboxId:d2f7bf6b169d4d9ca65b56d285cee83b77ebe598e1560374d9f2397db27fe0fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06278
8eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721240493481184747,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annotations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65,PodSandboxId:b5b8e1d746c8d2a45352b8a3ad8ed98ccc12e52438cfffc99ed7b3e0d101f57b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,
State:CONTAINER_EXITED,CreatedAt:1721240493391039760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b7789ad-c2e8-409b-9dd2-91eefec9f12b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.175996141Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a30c7734-6049-4136-a9e0-275b5e9fdeda name=/runtime.v1.RuntimeService/Version
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.176086228Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a30c7734-6049-4136-a9e0-275b5e9fdeda name=/runtime.v1.RuntimeService/Version
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.177672407Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c19b92c-3708-47f0-b4a7-407a4ae413e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.178159234Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721241481178134571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c19b92c-3708-47f0-b4a7-407a4ae413e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.178901336Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d62d24a-a265-4028-b5d4-4ad635ed0270 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.178975855Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d62d24a-a265-4028-b5d4-4ad635ed0270 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.179501820Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a00a7846ea188ac97805c8a904c4e7db5546adbd3c6427366a5e18765f00230,PodSandboxId:fb722c49526acb9b63a3500281d7f12c21959c411b4f5daccf0a4b5c1d2f1f18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241274585957256,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef4c83460a4233c24b932a08223b2c48f01338e960d513729d2cfe392d618067,PodSandboxId:fb722c49526acb9b63a3500281d7f12c21959c411b4f5daccf0a4b5c1d2f1f18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721241218591112768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36072ba2e30683562025920089bc3c181de035cdc0c1e1f74c1ffd635cf5ecbe,PodSandboxId:6acf1b0fa81b2f5c2a3e6a4b86384528fe7eba7b42939d345a8cbf01e8b0f2cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241217580315066,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1c461327466076142ce828af6145a0cd0d44d73409fe0f62b672a81260781ee,PodSandboxId:e9f8c63ebeab85911ed14c742621ec82efec6902e573dc16676f8e4082ab5c07,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721241210869764251,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotations:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815de2dec486d9906f6fe63a85a1a5d02a65a60d5e0eb7857d79a62f6d774fe3,PodSandboxId:c704554e3847d95caa225b7cc2144d3bd3736cd0216e1fc568a04c6b9667ecdf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241210077642624,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3563496ca45d287f33046dd2e085c8c769e7441f49b2478272464ce6624cfd9,PodSandboxId:fdcd052ef590ce54260d006ec784a364080bf1d43e6192285e69b3fb59d36bf8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721241192458051681,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e18a23f8599513addef6c2bfc7f909b3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d81bc3c984accc8863f08c5dd41eaeb884cb21afaec241ca9f8f106e49ca4954,PodSandboxId:4f67bfed73cfa2fce6cb36d8a4321f2872b6981de5bb1913a4ebb287f6b6f4b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241177782178274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":91
53,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171da935275e881bc54c6cde276f1768824f6d252865065adf30a82952618b4f,PodSandboxId:9617b3bb8c48cea8a2cc45453fd0391da35dba2a4551bc580cd4c08a5c0c2068,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241177723062771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8405ae,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35112af49fc7ce97d4f130139890ee9f8148cc8736a71efd3773020cbff2c51,PodSandboxId:c704554e3847d95caa225b7cc2144d3bd3736cd0216e1fc568a04c6b9667ecdf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721241177668087221,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,
io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e025856fa899dfbee6b276dde299e1b65214e1e2d733ea40a6d59431b5954074,PodSandboxId:6acf1b0fa81b2f5c2a3e6a4b86384528fe7eba7b42939d345a8cbf01e8b0f2cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241177665973684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes
.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae9c1607affb386fbb47a0752c97c15fb1c66f8d3d004233562d1837b44d8fcf,PodSandboxId:97e958e9cbe30dc85b6498d32e37266e72d4dd032dab0e75dd9293d9dd129709,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721241177505092448,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e0f6fe1021c8b008cc337d128d6aa3bc8d47901d78a8033d64c9e2d253d434,PodSandboxId:94c07421a1bf567b9e9b1f4650f4b35916c572882c20a54c7b9c60a7c3c7010a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721241177553592763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43252ed2b3b541f7b1a8cd399b9098b6c0b973167fde832f33cc5504198cd6fd,PodSandboxId:858a82ac2c20ee06b90789d56921732f764623a5f5880f67c9cfa15a23be55b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241177348749770,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26c2a38c92d350a1610d0d12459f90946d841ddbfa020ed8dab89d6a0190073,PodSandboxId:57a78f36912ceea70cc3c12d8156b382a7db9d300b401d1151aa520820775c06,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241177289847854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46bb59b8c88a5f72356d7eab6e299cb49357832b2f32f9da4d688f440d7708de,PodSandboxId:c6775eb0d598035f8cd74b757ae38e81e954dc7f515089267a841fa0e9cb45be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721240671679911058,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570,PodSandboxId:7904758cf99a7ab28546eb8985ee7b046204d30d1edf39094c972ed389e5fbd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240530705505219,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubern
etes.container.hash: 3e8405ae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1,PodSandboxId:1b4104fef2abaea24a96f4b40a7ae8dfd47c5d0b44c0b88ab5fd54254951ddff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240530698790877,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22,PodSandboxId:ea48366339cf7e3949139c7e70a94f474f735581280c6ec1323d8b6403124191,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721240518897930504,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0,PodSandboxId:9798b06dd09f98ca5f7cd1bfbfde8d398337d482475c16fb27417fc47dc574b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721240514654035048,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943,PodSandboxId:d2f7bf6b169d4d9ca65b56d285cee83b77ebe598e1560374d9f2397db27fe0fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06278
8eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721240493481184747,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annotations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65,PodSandboxId:b5b8e1d746c8d2a45352b8a3ad8ed98ccc12e52438cfffc99ed7b3e0d101f57b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,
State:CONTAINER_EXITED,CreatedAt:1721240493391039760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d62d24a-a265-4028-b5d4-4ad635ed0270 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.225798937Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c1f5468-f31a-4aed-a61d-6d3499c48e1c name=/runtime.v1.RuntimeService/Version
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.226081244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c1f5468-f31a-4aed-a61d-6d3499c48e1c name=/runtime.v1.RuntimeService/Version
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.227293059Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=158e2743-40ef-4630-a8f1-303b38e57c03 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.227792555Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721241481227768379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=158e2743-40ef-4630-a8f1-303b38e57c03 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.228205998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3427e3d-bc07-438e-960a-97a7a66cc750 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.228258760Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3427e3d-bc07-438e-960a-97a7a66cc750 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:38:01 ha-445282 crio[3743]: time="2024-07-17 18:38:01.228717863Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1a00a7846ea188ac97805c8a904c4e7db5546adbd3c6427366a5e18765f00230,PodSandboxId:fb722c49526acb9b63a3500281d7f12c21959c411b4f5daccf0a4b5c1d2f1f18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241274585957256,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef4c83460a4233c24b932a08223b2c48f01338e960d513729d2cfe392d618067,PodSandboxId:fb722c49526acb9b63a3500281d7f12c21959c411b4f5daccf0a4b5c1d2f1f18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721241218591112768,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae931c3b-8935-481d-bef4-0b05dad8c915,},Annotations:map[string]string{io.kubernetes.container.hash: 45a25f29,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36072ba2e30683562025920089bc3c181de035cdc0c1e1f74c1ffd635cf5ecbe,PodSandboxId:6acf1b0fa81b2f5c2a3e6a4b86384528fe7eba7b42939d345a8cbf01e8b0f2cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241217580315066,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1c461327466076142ce828af6145a0cd0d44d73409fe0f62b672a81260781ee,PodSandboxId:e9f8c63ebeab85911ed14c742621ec82efec6902e573dc16676f8e4082ab5c07,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721241210869764251,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotations:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:815de2dec486d9906f6fe63a85a1a5d02a65a60d5e0eb7857d79a62f6d774fe3,PodSandboxId:c704554e3847d95caa225b7cc2144d3bd3736cd0216e1fc568a04c6b9667ecdf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241210077642624,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3563496ca45d287f33046dd2e085c8c769e7441f49b2478272464ce6624cfd9,PodSandboxId:fdcd052ef590ce54260d006ec784a364080bf1d43e6192285e69b3fb59d36bf8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721241192458051681,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e18a23f8599513addef6c2bfc7f909b3,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d81bc3c984accc8863f08c5dd41eaeb884cb21afaec241ca9f8f106e49ca4954,PodSandboxId:4f67bfed73cfa2fce6cb36d8a4321f2872b6981de5bb1913a4ebb287f6b6f4b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241177782178274,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":91
53,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:171da935275e881bc54c6cde276f1768824f6d252865065adf30a82952618b4f,PodSandboxId:9617b3bb8c48cea8a2cc45453fd0391da35dba2a4551bc580cd4c08a5c0c2068,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241177723062771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 3e8405ae,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e35112af49fc7ce97d4f130139890ee9f8148cc8736a71efd3773020cbff2c51,PodSandboxId:c704554e3847d95caa225b7cc2144d3bd3736cd0216e1fc568a04c6b9667ecdf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721241177668087221,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,
io.kubernetes.pod.name: kube-controller-manager-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b71086ebffd4e15bc7c5f6152b697200,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e025856fa899dfbee6b276dde299e1b65214e1e2d733ea40a6d59431b5954074,PodSandboxId:6acf1b0fa81b2f5c2a3e6a4b86384528fe7eba7b42939d345a8cbf01e8b0f2cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241177665973684,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes
.pod.name: kube-apiserver-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 058431b563c109d1ce3751345314cdc4,},Annotations:map[string]string{io.kubernetes.container.hash: 72596726,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae9c1607affb386fbb47a0752c97c15fb1c66f8d3d004233562d1837b44d8fcf,PodSandboxId:97e958e9cbe30dc85b6498d32e37266e72d4dd032dab0e75dd9293d9dd129709,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721241177505092448,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81e0f6fe1021c8b008cc337d128d6aa3bc8d47901d78a8033d64c9e2d253d434,PodSandboxId:94c07421a1bf567b9e9b1f4650f4b35916c572882c20a54c7b9c60a7c3c7010a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721241177553592763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43252ed2b3b541f7b1a8cd399b9098b6c0b973167fde832f33cc5504198cd6fd,PodSandboxId:858a82ac2c20ee06b90789d56921732f764623a5f5880f67c9cfa15a23be55b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241177348749770,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a26c2a38c92d350a1610d0d12459f90946d841ddbfa020ed8dab89d6a0190073,PodSandboxId:57a78f36912ceea70cc3c12d8156b382a7db9d300b401d1151aa520820775c06,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241177289847854,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46bb59b8c88a5f72356d7eab6e299cb49357832b2f32f9da4d688f440d7708de,PodSandboxId:c6775eb0d598035f8cd74b757ae38e81e954dc7f515089267a841fa0e9cb45be,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721240671679911058,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mcsw8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 727368ca-3135-44f6-93b1-5cfb12476236,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eacb59a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570,PodSandboxId:7904758cf99a7ab28546eb8985ee7b046204d30d1edf39094c972ed389e5fbd4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240530705505219,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rzxbr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9630d87d-3470-4675-9b3c-a10ff614f5e1,},Annotations:map[string]string{io.kubern
etes.container.hash: 3e8405ae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1,PodSandboxId:1b4104fef2abaea24a96f4b40a7ae8dfd47c5d0b44c0b88ab5fd54254951ddff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240530698790877,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-28njs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8f2f11-c89c-42ae-829a-e2cf1dea11b6,},Annotations:map[string]string{io.kubernetes.container.hash: c4ea224,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22,PodSandboxId:ea48366339cf7e3949139c7e70a94f474f735581280c6ec1323d8b6403124191,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721240518897930504,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-75gcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 872c1132-e584-47c1-a873-74615d52511b,},Annotations:map[string]string{io.kubernetes.container.hash: fa6ac71a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0,PodSandboxId:9798b06dd09f98ca5f7cd1bfbfde8d398337d482475c16fb27417fc47dc574b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721240514654035048,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vxmp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cca555da-b93a-430c-8fbe-7e732af65a3a,},Annotations:map[string]string{io.kubernetes.container.hash: 56ae3158,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943,PodSandboxId:d2f7bf6b169d4d9ca65b56d285cee83b77ebe598e1560374d9f2397db27fe0fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06278
8eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721240493481184747,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5611ca3ae268bab43701867e47a0324e,},Annotations:map[string]string{io.kubernetes.container.hash: 9287e64f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65,PodSandboxId:b5b8e1d746c8d2a45352b8a3ad8ed98ccc12e52438cfffc99ed7b3e0d101f57b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,
State:CONTAINER_EXITED,CreatedAt:1721240493391039760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-445282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e44b0150b917f8f54d6a478ddc641,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3427e3d-bc07-438e-960a-97a7a66cc750 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1a00a7846ea18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   fb722c49526ac       storage-provisioner
	ef4c83460a423       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   fb722c49526ac       storage-provisioner
	36072ba2e3068       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      4 minutes ago       Running             kube-apiserver            3                   6acf1b0fa81b2       kube-apiserver-ha-445282
	c1c4613274660       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   e9f8c63ebeab8       busybox-fc5497c4f-mcsw8
	815de2dec486d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      4 minutes ago       Running             kube-controller-manager   2                   c704554e3847d       kube-controller-manager-ha-445282
	c3563496ca45d       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   fdcd052ef590c       kube-vip-ha-445282
	d81bc3c984acc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   4f67bfed73cfa       coredns-7db6d8ff4d-28njs
	171da935275e8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   9617b3bb8c48c       coredns-7db6d8ff4d-rzxbr
	e35112af49fc7       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      5 minutes ago       Exited              kube-controller-manager   1                   c704554e3847d       kube-controller-manager-ha-445282
	e025856fa899d       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      5 minutes ago       Exited              kube-apiserver            2                   6acf1b0fa81b2       kube-apiserver-ha-445282
	81e0f6fe1021c       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      5 minutes ago       Running             kube-proxy                1                   94c07421a1bf5       kube-proxy-vxmp8
	ae9c1607affb3       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      5 minutes ago       Running             kindnet-cni               1                   97e958e9cbe30       kindnet-75gcw
	43252ed2b3b54       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      5 minutes ago       Running             kube-scheduler            1                   858a82ac2c20e       kube-scheduler-ha-445282
	a26c2a38c92d3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   57a78f36912ce       etcd-ha-445282
	46bb59b8c88a5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   c6775eb0d5980       busybox-fc5497c4f-mcsw8
	408ccf9c4f5cb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   7904758cf99a7       coredns-7db6d8ff4d-rzxbr
	9c8f03436294a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   1b4104fef2aba       coredns-7db6d8ff4d-28njs
	6e8619164a43b       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    16 minutes ago      Exited              kindnet-cni               0                   ea48366339cf7       kindnet-75gcw
	ab95f55f84d8d       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      16 minutes ago      Exited              kube-proxy                0                   9798b06dd09f9       kube-proxy-vxmp8
	09fdf7de5bf8c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   d2f7bf6b169d4       etcd-ha-445282
	585303a41caea       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      16 minutes ago      Exited              kube-scheduler            0                   b5b8e1d746c8d       kube-scheduler-ha-445282
	
	
	==> coredns [171da935275e881bc54c6cde276f1768824f6d252865065adf30a82952618b4f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[529904031]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 18:33:06.720) (total time: 10001ms):
	Trace[529904031]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:33:16.722)
	Trace[529904031]: [10.001905183s] [10.001905183s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:42966->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:42966->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [408ccf9c4f5cbf7c435a49cbc548ab74cfb3edb5ff5245898a3d2efe25803570] <==
	[INFO] 10.244.1.2:42067 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000193888s
	[INFO] 10.244.1.2:38612 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103227s
	[INFO] 10.244.0.4:44523 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001703135s
	[INFO] 10.244.0.4:59477 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107361s
	[INFO] 10.244.0.4:56198 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108839s
	[INFO] 10.244.0.4:38398 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00004501s
	[INFO] 10.244.0.4:41070 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061061s
	[INFO] 10.244.2.2:37193 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00186169s
	[INFO] 10.244.2.2:47175 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001259008s
	[INFO] 10.244.2.2:43118 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117844s
	[INFO] 10.244.2.2:43940 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104875s
	[INFO] 10.244.1.2:43839 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163961s
	[INFO] 10.244.1.2:57262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014754s
	[INFO] 10.244.1.2:59861 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089161s
	[INFO] 10.244.0.4:35507 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101753s
	[INFO] 10.244.0.4:50990 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000048865s
	[INFO] 10.244.2.2:35692 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101106s
	[INFO] 10.244.2.2:47438 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106571s
	[INFO] 10.244.0.4:37290 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140704s
	[INFO] 10.244.0.4:37755 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145358s
	[INFO] 10.244.2.2:58729 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097845s
	[INFO] 10.244.2.2:47405 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008526s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1859&timeout=7m43s&timeoutSeconds=463&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [9c8f03436294a943982c955d41f006ae30ae88c5b9d1067201c1543122f3ffc1] <==
	[INFO] 10.244.1.2:59627 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.080250702s
	[INFO] 10.244.0.4:51929 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136107s
	[INFO] 10.244.0.4:36818 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096811s
	[INFO] 10.244.0.4:42583 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001301585s
	[INFO] 10.244.2.2:59932 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203977s
	[INFO] 10.244.2.2:50906 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000207365s
	[INFO] 10.244.2.2:41438 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000168363s
	[INFO] 10.244.2.2:47479 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000170645s
	[INFO] 10.244.1.2:54595 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000208251s
	[INFO] 10.244.0.4:34251 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081496s
	[INFO] 10.244.0.4:35201 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063768s
	[INFO] 10.244.2.2:50926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154679s
	[INFO] 10.244.2.2:39243 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122767s
	[INFO] 10.244.1.2:50770 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014514s
	[INFO] 10.244.1.2:37706 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166071s
	[INFO] 10.244.1.2:53197 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000306441s
	[INFO] 10.244.1.2:34142 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128366s
	[INFO] 10.244.0.4:60617 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102661s
	[INFO] 10.244.0.4:54474 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060033s
	[INFO] 10.244.2.2:50977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014662s
	[INFO] 10.244.2.2:58773 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013261s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1896&timeout=6m38s&timeoutSeconds=398&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1892&timeout=5m22s&timeoutSeconds=322&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [d81bc3c984accc8863f08c5dd41eaeb884cb21afaec241ca9f8f106e49ca4954] <==
	Trace[339431070]: [10.001650188s] [10.001650188s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[733380150]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 18:33:07.161) (total time: 10000ms):
	Trace[733380150]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (18:33:17.162)
	Trace[733380150]: [10.000989737s] [10.000989737s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44808->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44808->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-445282
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-445282
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=ha-445282
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T18_21_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:21:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-445282
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:37:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:33:40 +0000   Wed, 17 Jul 2024 18:21:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:33:40 +0000   Wed, 17 Jul 2024 18:21:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:33:40 +0000   Wed, 17 Jul 2024 18:21:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:33:40 +0000   Wed, 17 Jul 2024 18:22:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    ha-445282
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d1ea799c4fd84c5c8c95385b6a2349f7
	  System UUID:                d1ea799c-4fd8-4c5c-8c95-385b6a2349f7
	  Boot ID:                    58e8f531-06d1-4b66-9fa8-93cd9d417ce6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mcsw8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-28njs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-rzxbr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-445282                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-75gcw                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-445282             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-445282    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-vxmp8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-445282             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-445282                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 4m18s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-445282 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-445282 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-445282 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                    node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-445282 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	  Warning  ContainerGCFailed        5m22s (x2 over 6m22s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m14s                  node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	  Normal   RegisteredNode           4m10s                  node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-445282 event: Registered Node ha-445282 in Controller
	
	
	Name:               ha-445282-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-445282-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=ha-445282
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_22_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:22:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-445282-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:37:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:34:22 +0000   Wed, 17 Jul 2024 18:33:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:34:22 +0000   Wed, 17 Jul 2024 18:33:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:34:22 +0000   Wed, 17 Jul 2024 18:33:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:34:22 +0000   Wed, 17 Jul 2024 18:33:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-445282-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5dee104babdb45fe968765f68a06ccd6
	  System UUID:                5dee104b-abdb-45fe-9687-65f68a06ccd6
	  Boot ID:                    b905f24f-7dbb-4fff-9e1c-fcdeea9c5023
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-blwvw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-445282-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-mdqdz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-445282-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-445282-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-xs65r                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-445282-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-445282-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m58s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-445282-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-445282-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-445282-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-445282-m02 status is now: NodeNotReady
	  Normal  Starting                 4m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m48s (x8 over 4m48s)  kubelet          Node ha-445282-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m48s (x8 over 4m48s)  kubelet          Node ha-445282-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m48s (x7 over 4m48s)  kubelet          Node ha-445282-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-445282-m02 event: Registered Node ha-445282-m02 in Controller
	
	
	Name:               ha-445282-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-445282-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=ha-445282
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_25_05_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:25:05 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-445282-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:35:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 18:35:13 +0000   Wed, 17 Jul 2024 18:36:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 18:35:13 +0000   Wed, 17 Jul 2024 18:36:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 18:35:13 +0000   Wed, 17 Jul 2024 18:36:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 18:35:13 +0000   Wed, 17 Jul 2024 18:36:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    ha-445282-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55cbb1c4afb849b39c587987c52eb826
	  System UUID:                55cbb1c4-afb8-49b3-9c58-7987c52eb826
	  Boot ID:                    df7937d5-dc45-4a61-8a87-51d72c4268f1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-77t6b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-nx7rb              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-jstdw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   RegisteredNode           12m                    node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-445282-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-445282-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-445282-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-445282-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m14s                  node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal   RegisteredNode           4m10s                  node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal   NodeNotReady             3m34s                  node-controller  Node ha-445282-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-445282-m04 event: Registered Node ha-445282-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-445282-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-445282-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-445282-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s (x2 over 2m48s)  kubelet          Node ha-445282-m04 has been rebooted, boot id: df7937d5-dc45-4a61-8a87-51d72c4268f1
	  Normal   NodeReady                2m48s (x2 over 2m48s)  kubelet          Node ha-445282-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s                   node-controller  Node ha-445282-m04 status is now: NodeNotReady
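The describe output above shows ha-445282-m04 carrying the node.kubernetes.io/unreachable taints with every condition Unknown ("Kubelet stopped posting node status"), while both control-plane nodes report Ready. For context only, a hedged client-go sketch that summarizes the same Ready condition and taint count per node; the kubeconfig path is an assumption:

// node_summary.go - illustrative sketch, not taken from the test code.
// Prints each node's Ready condition and its taint count, mirroring the
// "describe nodes" output captured above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; minikube writes one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		ready := "Unknown"
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				ready = string(c.Status)
			}
		}
		fmt.Printf("%-16s Ready=%-8s taints=%d\n", n.Name, ready, len(n.Spec.Taints))
	}
}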
	
	
	==> dmesg <==
	[  +9.891308] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.059987] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056048] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.193800] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.120214] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.274662] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.047178] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.805512] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.055406] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.996103] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.082270] kauditd_printk_skb: 79 callbacks suppressed
	[ +15.197381] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.192890] kauditd_printk_skb: 34 callbacks suppressed
	[Jul17 18:22] kauditd_printk_skb: 24 callbacks suppressed
	[Jul17 18:32] systemd-fstab-generator[3663]: Ignoring "noauto" option for root device
	[  +0.156972] systemd-fstab-generator[3675]: Ignoring "noauto" option for root device
	[  +0.202401] systemd-fstab-generator[3689]: Ignoring "noauto" option for root device
	[  +0.158844] systemd-fstab-generator[3701]: Ignoring "noauto" option for root device
	[  +0.289910] systemd-fstab-generator[3729]: Ignoring "noauto" option for root device
	[  +2.618857] systemd-fstab-generator[3828]: Ignoring "noauto" option for root device
	[  +5.877742] kauditd_printk_skb: 122 callbacks suppressed
	[Jul17 18:33] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.064868] kauditd_printk_skb: 1 callbacks suppressed
	[ +18.240777] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.384053] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [09fdf7de5bf8ce9446bbf806731965f941aad214e7e235e058e07be242ccc943] <==
	2024/07/17 18:31:16 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/17 18:31:16 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/17 18:31:16 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/17 18:31:16 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/17 18:31:16 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T18:31:16.317849Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.147:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T18:31:16.317951Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.147:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T18:31:16.318046Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"c194f0f1585e7a7d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-17T18:31:16.318317Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"47d74de991c9c59d"}
	{"level":"info","ts":"2024-07-17T18:31:16.318356Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"47d74de991c9c59d"}
	{"level":"info","ts":"2024-07-17T18:31:16.318405Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"47d74de991c9c59d"}
	{"level":"info","ts":"2024-07-17T18:31:16.318547Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d"}
	{"level":"info","ts":"2024-07-17T18:31:16.318627Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d"}
	{"level":"info","ts":"2024-07-17T18:31:16.318697Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"47d74de991c9c59d"}
	{"level":"info","ts":"2024-07-17T18:31:16.318731Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"47d74de991c9c59d"}
	{"level":"info","ts":"2024-07-17T18:31:16.318758Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:31:16.318798Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:31:16.318844Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:31:16.318903Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:31:16.318951Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:31:16.319032Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:31:16.319067Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:31:16.321712Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.147:2380"}
	{"level":"info","ts":"2024-07-17T18:31:16.321955Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.147:2380"}
	{"level":"info","ts":"2024-07-17T18:31:16.322048Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-445282","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.147:2380"],"advertise-client-urls":["https://192.168.39.147:2379"]}
	
	
	==> etcd [a26c2a38c92d350a1610d0d12459f90946d841ddbfa020ed8dab89d6a0190073] <==
	{"level":"info","ts":"2024-07-17T18:34:35.384665Z","caller":"traceutil/trace.go:171","msg":"trace[1956142068] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2347; }","duration":"105.866973ms","start":"2024-07-17T18:34:35.278788Z","end":"2024-07-17T18:34:35.384655Z","steps":["trace[1956142068] 'agreement among raft nodes before linearized reading'  (duration: 105.804466ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:34:36.095399Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:34:36.095716Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:34:36.096263Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:34:36.124208Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c194f0f1585e7a7d","to":"e193095643f373c5","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-17T18:34:36.124367Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:34:36.137094Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c194f0f1585e7a7d","to":"e193095643f373c5","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-17T18:34:36.13723Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:35:27.407669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c194f0f1585e7a7d switched to configuration voters=(5176691962254312861 13949038865233640061)"}
	{"level":"info","ts":"2024-07-17T18:35:27.409817Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"582b8c8375119e1d","local-member-id":"c194f0f1585e7a7d","removed-remote-peer-id":"e193095643f373c5","removed-remote-peer-urls":["https://192.168.39.214:2380"]}
	{"level":"info","ts":"2024-07-17T18:35:27.409906Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e193095643f373c5"}
	{"level":"warn","ts":"2024-07-17T18:35:27.410613Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:35:27.410678Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e193095643f373c5"}
	{"level":"warn","ts":"2024-07-17T18:35:27.413557Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:35:27.413696Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:35:27.415566Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	{"level":"warn","ts":"2024-07-17T18:35:27.415876Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5","error":"context canceled"}
	{"level":"warn","ts":"2024-07-17T18:35:27.415981Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"e193095643f373c5","error":"failed to read e193095643f373c5 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-17T18:35:27.416116Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	{"level":"warn","ts":"2024-07-17T18:35:27.416584Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5","error":"context canceled"}
	{"level":"info","ts":"2024-07-17T18:35:27.416709Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c194f0f1585e7a7d","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:35:27.416748Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e193095643f373c5"}
	{"level":"info","ts":"2024-07-17T18:35:27.416864Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"c194f0f1585e7a7d","removed-remote-peer-id":"e193095643f373c5"}
	{"level":"warn","ts":"2024-07-17T18:35:27.43552Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"c194f0f1585e7a7d","remote-peer-id-stream-handler":"c194f0f1585e7a7d","remote-peer-id-from":"e193095643f373c5"}
	{"level":"warn","ts":"2024-07-17T18:35:27.437255Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.214:34434","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:38:01 up 17 min,  0 users,  load average: 0.16, 0.28, 0.24
	Linux ha-445282 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6e8619164a43b2094eae58e2785e6b72eb30e667510fe01ecf9aeb78b6f16f22] <==
	I0717 18:30:50.000035       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:30:50.000209       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:30:50.000493       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:30:50.000538       1 main.go:303] handling current node
	I0717 18:30:50.000589       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:30:50.000607       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:30:50.000695       1 main.go:299] Handling node with IPs: map[192.168.39.214:{}]
	I0717 18:30:50.000714       1 main.go:326] Node ha-445282-m03 has CIDR [10.244.2.0/24] 
	I0717 18:30:59.993772       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:30:59.993805       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:30:59.994002       1 main.go:299] Handling node with IPs: map[192.168.39.214:{}]
	I0717 18:30:59.994031       1 main.go:326] Node ha-445282-m03 has CIDR [10.244.2.0/24] 
	I0717 18:30:59.994083       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:30:59.994105       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:30:59.994170       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:30:59.994192       1 main.go:303] handling current node
	E0717 18:31:09.573998       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1872&timeout=8m13s&timeoutSeconds=493&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	I0717 18:31:09.994558       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:31:09.994616       1 main.go:303] handling current node
	I0717 18:31:09.994831       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:31:09.994848       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:31:09.995079       1 main.go:299] Handling node with IPs: map[192.168.39.214:{}]
	I0717 18:31:09.995116       1 main.go:326] Node ha-445282-m03 has CIDR [10.244.2.0/24] 
	I0717 18:31:09.995233       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:31:09.995261       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ae9c1607affb386fbb47a0752c97c15fb1c66f8d3d004233562d1837b44d8fcf] <==
	I0717 18:37:18.711295       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:37:28.710589       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:37:28.710720       1 main.go:303] handling current node
	I0717 18:37:28.710772       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:37:28.710820       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:37:28.711040       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:37:28.711096       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:37:38.711653       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:37:38.711798       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:37:38.712082       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:37:38.712126       1 main.go:303] handling current node
	I0717 18:37:38.712142       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:37:38.712147       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:37:48.712972       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:37:48.713120       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:37:48.713305       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:37:48.713331       1 main.go:303] handling current node
	I0717 18:37:48.713357       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:37:48.713374       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	I0717 18:37:58.710663       1 main.go:299] Handling node with IPs: map[192.168.39.41:{}]
	I0717 18:37:58.710853       1 main.go:326] Node ha-445282-m04 has CIDR [10.244.3.0/24] 
	I0717 18:37:58.711075       1 main.go:299] Handling node with IPs: map[192.168.39.147:{}]
	I0717 18:37:58.711139       1 main.go:303] handling current node
	I0717 18:37:58.711173       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:37:58.711278       1 main.go:326] Node ha-445282-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [36072ba2e30683562025920089bc3c181de035cdc0c1e1f74c1ffd635cf5ecbe] <==
	I0717 18:33:39.462464       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0717 18:33:39.462476       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0717 18:33:39.550595       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 18:33:39.551773       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 18:33:39.552399       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 18:33:39.552520       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 18:33:39.552550       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 18:33:39.553343       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 18:33:39.558249       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 18:33:39.562458       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 18:33:39.562550       1 aggregator.go:165] initial CRD sync complete...
	I0717 18:33:39.562567       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 18:33:39.562572       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 18:33:39.562577       1 cache.go:39] Caches are synced for autoregister controller
	W0717 18:33:39.580665       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.198 192.168.39.214]
	I0717 18:33:39.581614       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 18:33:39.592787       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 18:33:39.592919       1 policy_source.go:224] refreshing policies
	I0717 18:33:39.633972       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 18:33:39.683397       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 18:33:39.691180       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0717 18:33:39.695323       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0717 18:33:40.454267       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0717 18:33:41.026298       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.147 192.168.39.198 192.168.39.214]
	W0717 18:33:51.020659       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.147 192.168.39.198]
	
	
	==> kube-apiserver [e025856fa899dfbee6b276dde299e1b65214e1e2d733ea40a6d59431b5954074] <==
	I0717 18:32:58.382283       1 options.go:221] external host was not specified, using 192.168.39.147
	I0717 18:32:58.384712       1 server.go:148] Version: v1.30.2
	I0717 18:32:58.385755       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:32:58.898303       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0717 18:32:58.902775       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 18:32:58.906127       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0717 18:32:58.906271       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0717 18:32:58.906744       1 instance.go:299] Using reconciler: lease
	W0717 18:33:18.897607       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0717 18:33:18.897719       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0717 18:33:18.908094       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0717 18:33:18.908129       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
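This apiserver instance exits fatally because its local etcd client endpoint (127.0.0.1:2379) never completes the TLS handshake within the deadline. As a hedged, stdlib-only aside, the most basic check is plain TCP reachability of that port from the node; this sketch does not attempt the TLS handshake the apiserver performs:

// etcd_probe.go - illustrative only; checks TCP reachability of the local
// etcd client port that the failing apiserver above was dialing.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 3*time.Second)
	if err != nil {
		fmt.Println("etcd client port not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("TCP connect to 127.0.0.1:2379 succeeded (TLS handshake not attempted)")
}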
	
	
	==> kube-controller-manager [815de2dec486d9906f6fe63a85a1a5d02a65a60d5e0eb7857d79a62f6d774fe3] <==
	I0717 18:35:24.106884       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.192909ms"
	I0717 18:35:24.127325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.372891ms"
	I0717 18:35:24.252335       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="124.505511ms"
	I0717 18:35:24.333235       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.549977ms"
	I0717 18:35:24.394842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.537117ms"
	I0717 18:35:24.394959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.597µs"
	I0717 18:35:26.324659       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.073µs"
	I0717 18:35:26.569826       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.5µs"
	I0717 18:35:26.588100       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.291µs"
	I0717 18:35:26.592940       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.618µs"
	I0717 18:35:29.818176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.210366ms"
	I0717 18:35:29.818287       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.258µs"
	I0717 18:35:38.888693       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-445282-m04"
	E0717 18:35:51.517849       1 gc_controller.go:153] "Failed to get node" err="node \"ha-445282-m03\" not found" logger="pod-garbage-collector-controller" node="ha-445282-m03"
	E0717 18:35:51.517966       1 gc_controller.go:153] "Failed to get node" err="node \"ha-445282-m03\" not found" logger="pod-garbage-collector-controller" node="ha-445282-m03"
	E0717 18:35:51.517992       1 gc_controller.go:153] "Failed to get node" err="node \"ha-445282-m03\" not found" logger="pod-garbage-collector-controller" node="ha-445282-m03"
	E0717 18:35:51.518015       1 gc_controller.go:153] "Failed to get node" err="node \"ha-445282-m03\" not found" logger="pod-garbage-collector-controller" node="ha-445282-m03"
	E0717 18:35:51.518038       1 gc_controller.go:153] "Failed to get node" err="node \"ha-445282-m03\" not found" logger="pod-garbage-collector-controller" node="ha-445282-m03"
	E0717 18:36:11.518870       1 gc_controller.go:153] "Failed to get node" err="node \"ha-445282-m03\" not found" logger="pod-garbage-collector-controller" node="ha-445282-m03"
	E0717 18:36:11.519001       1 gc_controller.go:153] "Failed to get node" err="node \"ha-445282-m03\" not found" logger="pod-garbage-collector-controller" node="ha-445282-m03"
	E0717 18:36:11.519028       1 gc_controller.go:153] "Failed to get node" err="node \"ha-445282-m03\" not found" logger="pod-garbage-collector-controller" node="ha-445282-m03"
	E0717 18:36:11.519051       1 gc_controller.go:153] "Failed to get node" err="node \"ha-445282-m03\" not found" logger="pod-garbage-collector-controller" node="ha-445282-m03"
	E0717 18:36:11.519079       1 gc_controller.go:153] "Failed to get node" err="node \"ha-445282-m03\" not found" logger="pod-garbage-collector-controller" node="ha-445282-m03"
	I0717 18:36:16.982334       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.626877ms"
	I0717 18:36:16.982619       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.502µs"
	
	
	==> kube-controller-manager [e35112af49fc7ce97d4f130139890ee9f8148cc8736a71efd3773020cbff2c51] <==
	I0717 18:32:58.869210       1 serving.go:380] Generated self-signed cert in-memory
	I0717 18:32:59.159486       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0717 18:32:59.159583       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:32:59.161160       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0717 18:32:59.161568       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 18:32:59.161582       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 18:32:59.161597       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0717 18:33:19.914169       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.147:8443/healthz\": dial tcp 192.168.39.147:8443: connect: connection refused"
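This controller-manager start fails while waiting for the apiserver's /healthz on 192.168.39.147:8443 to answer. For illustration only, a small Go sketch of the same probe; InsecureSkipVerify keeps the example self-contained and is not how the real component authenticates to the apiserver:

// healthz_probe.go - illustrative sketch of the /healthz check the
// controller-manager above timed out on.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.147:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not healthy yet:", err) // e.g. connection refused
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %s (%d)\n", string(body), resp.StatusCode)
}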
	
	
	==> kube-proxy [81e0f6fe1021c8b008cc337d128d6aa3bc8d47901d78a8033d64c9e2d253d434] <==
	I0717 18:32:59.235401       1 server_linux.go:69] "Using iptables proxy"
	E0717 18:33:00.038232       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-445282\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 18:33:03.111707       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-445282\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 18:33:06.182392       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-445282\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 18:33:12.327854       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-445282\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 18:33:24.613915       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-445282\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0717 18:33:42.807993       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.147"]
	I0717 18:33:42.845715       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:33:42.845796       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:33:42.845822       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:33:42.848852       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:33:42.849122       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:33:42.849149       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:33:42.850714       1 config.go:192] "Starting service config controller"
	I0717 18:33:42.850770       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:33:42.850798       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:33:42.850818       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:33:42.851675       1 config.go:319] "Starting node config controller"
	I0717 18:33:42.851723       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:33:42.950976       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 18:33:42.951037       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:33:42.952508       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ab95f55f84d8db03b0d3f835c0c5eab06be12e88ce02112b43472ec6c464c6d0] <==
	E0717 18:30:01.862614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:04.933928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:04.934074       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:04.934278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:04.934353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:04.934536       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-445282&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:04.934607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-445282&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:11.079483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:11.079600       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:11.079515       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-445282&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:11.079774       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-445282&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:14.163064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:14.163657       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:20.295355       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:20.295451       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:23.366093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-445282&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:23.366150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-445282&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:29.511120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:29.511198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:38.726673       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-445282&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:38.726789       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-445282&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:41.798821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:41.798869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1812": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 18:30:47.942170       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 18:30:47.942247       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [43252ed2b3b541f7b1a8cd399b9098b6c0b973167fde832f33cc5504198cd6fd] <==
	W0717 18:33:29.792601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.147:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	E0717 18:33:29.792653       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.147:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	W0717 18:33:30.399986       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.147:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	E0717 18:33:30.400234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.147:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	W0717 18:33:33.737704       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.147:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	E0717 18:33:33.737825       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.147:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	W0717 18:33:35.099939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.147:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	E0717 18:33:35.100010       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.147:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	W0717 18:33:35.783175       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.147:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	E0717 18:33:35.783335       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.147:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	W0717 18:33:36.315631       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.147:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	E0717 18:33:36.315816       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.147:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	W0717 18:33:36.540228       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.147:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	E0717 18:33:36.540307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.147:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	W0717 18:33:37.199145       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.147:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	E0717 18:33:37.199272       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.147:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.147:8443: connect: connection refused
	W0717 18:33:39.481357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:33:39.481539       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:33:39.481747       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:33:39.481848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 18:33:39.481983       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:33:39.482075       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 18:33:39.482198       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:33:39.482315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0717 18:33:40.723687       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [585303a41caea4bbfa8907c8b3b2d134a2f1c5c29f6f5a8eb0d4369fdb534d65] <==
	W0717 18:31:12.565002       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:31:12.565168       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:31:12.608786       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:31:12.608884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 18:31:12.763667       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:31:12.763708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 18:31:12.894632       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:31:12.894685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 18:31:12.896005       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:31:12.896118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 18:31:13.089839       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:31:13.089893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:31:13.222876       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:31:13.222924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:31:13.260627       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 18:31:13.260660       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 18:31:13.279645       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:31:13.279675       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 18:31:13.678514       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:31:13.678654       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 18:31:14.025047       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:31:14.025142       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:31:16.201762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:31:16.201817       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:31:16.236579       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 17 18:34:20 ha-445282 kubelet[1382]: E0717 18:34:20.568997    1382 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ae931c3b-8935-481d-bef4-0b05dad8c915)\"" pod="kube-system/storage-provisioner" podUID="ae931c3b-8935-481d-bef4-0b05dad8c915"
	Jul 17 18:34:32 ha-445282 kubelet[1382]: I0717 18:34:32.569216    1382 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-445282" podUID="ca5bcedd-e43a-4711-bdfc-dc1c2c524d86"
	Jul 17 18:34:32 ha-445282 kubelet[1382]: I0717 18:34:32.589509    1382 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-445282"
	Jul 17 18:34:34 ha-445282 kubelet[1382]: I0717 18:34:34.568792    1382 scope.go:117] "RemoveContainer" containerID="ef4c83460a4233c24b932a08223b2c48f01338e960d513729d2cfe392d618067"
	Jul 17 18:34:35 ha-445282 kubelet[1382]: I0717 18:34:35.573941    1382 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-445282" podStartSLOduration=3.573915595 podStartE2EDuration="3.573915595s" podCreationTimestamp="2024-07-17 18:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:34:35.573682727 +0000 UTC m=+776.126606792" watchObservedRunningTime="2024-07-17 18:34:35.573915595 +0000 UTC m=+776.126839641"
	Jul 17 18:34:39 ha-445282 kubelet[1382]: E0717 18:34:39.587875    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:34:39 ha-445282 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:34:39 ha-445282 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:34:39 ha-445282 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:34:39 ha-445282 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:35:39 ha-445282 kubelet[1382]: E0717 18:35:39.589152    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:35:39 ha-445282 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:35:39 ha-445282 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:35:39 ha-445282 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:35:39 ha-445282 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:36:39 ha-445282 kubelet[1382]: E0717 18:36:39.588241    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:36:39 ha-445282 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:36:39 ha-445282 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:36:39 ha-445282 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:36:39 ha-445282 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:37:39 ha-445282 kubelet[1382]: E0717 18:37:39.588020    1382 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:37:39 ha-445282 kubelet[1382]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:37:39 ha-445282 kubelet[1382]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:37:39 ha-445282 kubelet[1382]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:37:39 ha-445282 kubelet[1382]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:38:00.762016  420313 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19282-392903/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
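Note on the stderr above: "bufio.Scanner: token too long" is standard Go behavior when a single line in the scanned file exceeds bufio.Scanner's default 64 KiB token limit, which the very long lines in lastStart.txt evidently do. Below is a minimal sketch of reading such a file with an enlarged scanner buffer; the path is copied from the error message and the 10 MiB cap is an arbitrary assumption, not minikube's own implementation.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Path taken from the error above; any file containing very long lines works.
	f, err := os.Open("/home/jenkins/minikube-integration/19282-392903/.minikube/logs/lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default limit is bufio.MaxScanTokenSize (64 KiB); raising it keeps a
	// single long line from triggering "token too long".
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

Reading with bufio.Reader (e.g. ReadString('\n')) would avoid the fixed token limit altogether.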
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-445282 -n ha-445282
helpers_test.go:261: (dbg) Run:  kubectl --context ha-445282 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.92s)
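The informer errors in the kube-proxy log above all reduce to one symptom: the HA endpoint control-plane.minikube.internal:8443 (192.168.39.254, the cluster's virtual IP) is unreachable, so every list/watch fails with "no route to host". A minimal connectivity probe is sketched below; the address is copied from the log, and the probe is only an illustration of the failing dial, not part of the test suite.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Virtual-IP endpoint copied from the kube-proxy errors above.
	addr := "192.168.39.254:8443"
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		// While the control plane is unreachable this reports
		// "connect: no route to host", matching the log lines above.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("control-plane endpoint reachable:", addr)
}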

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (332.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-717026
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-717026
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-717026: exit status 82 (2m1.88029783s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-717026-m03"  ...
	* Stopping node "multinode-717026-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-717026" : exit status 82
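Exit status 82 corresponds to the GUEST_STOP_TIMEOUT message in the stderr above: the VMs were still reported "Running" when the stop gave up after roughly two minutes. A minimal sketch of invoking the same stop command and inspecting a non-zero exit status follows; the five-minute deadline and the error handling are illustrative assumptions, not the test suite's logic.

package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// The deadline here is an assumption; the failing run above timed out
	// after ~2 minutes with exit status 82 (GUEST_STOP_TIMEOUT).
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "stop", "-p", "multinode-717026")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr):
		fmt.Printf("minikube stop exited with status %d\n", exitErr.ExitCode())
	case err != nil:
		fmt.Println("failed to run minikube stop:", err)
	}
}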
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-717026 --wait=true -v=8 --alsologtostderr
E0717 18:55:05.952128  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:57:13.091492  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:58:08.998163  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-717026 --wait=true -v=8 --alsologtostderr: (3m28.090683836s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-717026
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-717026 -n multinode-717026
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-717026 logs -n 25: (1.467053462s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-717026 ssh -n                                                                 | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-717026 cp multinode-717026-m02:/home/docker/cp-test.txt                       | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4227061913/001/cp-test_multinode-717026-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n                                                                 | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-717026 cp multinode-717026-m02:/home/docker/cp-test.txt                       | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026:/home/docker/cp-test_multinode-717026-m02_multinode-717026.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n                                                                 | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n multinode-717026 sudo cat                                       | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | /home/docker/cp-test_multinode-717026-m02_multinode-717026.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-717026 cp multinode-717026-m02:/home/docker/cp-test.txt                       | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m03:/home/docker/cp-test_multinode-717026-m02_multinode-717026-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n                                                                 | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n multinode-717026-m03 sudo cat                                   | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | /home/docker/cp-test_multinode-717026-m02_multinode-717026-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-717026 cp testdata/cp-test.txt                                                | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n                                                                 | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-717026 cp multinode-717026-m03:/home/docker/cp-test.txt                       | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4227061913/001/cp-test_multinode-717026-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n                                                                 | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-717026 cp multinode-717026-m03:/home/docker/cp-test.txt                       | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026:/home/docker/cp-test_multinode-717026-m03_multinode-717026.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n                                                                 | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n multinode-717026 sudo cat                                       | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | /home/docker/cp-test_multinode-717026-m03_multinode-717026.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-717026 cp multinode-717026-m03:/home/docker/cp-test.txt                       | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m02:/home/docker/cp-test_multinode-717026-m03_multinode-717026-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n                                                                 | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n multinode-717026-m02 sudo cat                                   | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | /home/docker/cp-test_multinode-717026-m03_multinode-717026-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-717026 node stop m03                                                          | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	| node    | multinode-717026 node start                                                             | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:52 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-717026                                                                | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:52 UTC |                     |
	| stop    | -p multinode-717026                                                                     | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:52 UTC |                     |
	| start   | -p multinode-717026                                                                     | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:54 UTC | 17 Jul 24 18:58 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-717026                                                                | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:58 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:54:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:54:41.195086  429608 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:54:41.195459  429608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:54:41.195511  429608 out.go:304] Setting ErrFile to fd 2...
	I0717 18:54:41.195529  429608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:54:41.195964  429608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:54:41.196957  429608 out.go:298] Setting JSON to false
	I0717 18:54:41.197974  429608 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9424,"bootTime":1721233057,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:54:41.198035  429608 start.go:139] virtualization: kvm guest
	I0717 18:54:41.199808  429608 out.go:177] * [multinode-717026] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:54:41.201093  429608 notify.go:220] Checking for updates...
	I0717 18:54:41.201128  429608 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 18:54:41.202396  429608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:54:41.203663  429608 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:54:41.204952  429608 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:54:41.206148  429608 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:54:41.207371  429608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:54:41.209210  429608 config.go:182] Loaded profile config "multinode-717026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:54:41.209349  429608 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:54:41.209993  429608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:54:41.210064  429608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:54:41.225971  429608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35135
	I0717 18:54:41.226462  429608 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:54:41.227134  429608 main.go:141] libmachine: Using API Version  1
	I0717 18:54:41.227191  429608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:54:41.227580  429608 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:54:41.227764  429608 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:54:41.263299  429608 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 18:54:41.264863  429608 start.go:297] selected driver: kvm2
	I0717 18:54:41.264891  429608 start.go:901] validating driver "kvm2" against &{Name:multinode-717026 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.2 ClusterName:multinode-717026 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:54:41.265086  429608 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:54:41.265517  429608 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:54:41.265626  429608 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:54:41.281291  429608 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:54:41.281958  429608 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:54:41.282025  429608 cni.go:84] Creating CNI manager for ""
	I0717 18:54:41.282036  429608 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 18:54:41.282106  429608 start.go:340] cluster config:
	{Name:multinode-717026 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-717026 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:54:41.282249  429608 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:54:41.284165  429608 out.go:177] * Starting "multinode-717026" primary control-plane node in "multinode-717026" cluster
	I0717 18:54:41.285551  429608 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:54:41.285589  429608 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:54:41.285603  429608 cache.go:56] Caching tarball of preloaded images
	I0717 18:54:41.285678  429608 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:54:41.285691  429608 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:54:41.285832  429608 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/config.json ...
	I0717 18:54:41.286164  429608 start.go:360] acquireMachinesLock for multinode-717026: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:54:41.286220  429608 start.go:364] duration metric: took 30.873µs to acquireMachinesLock for "multinode-717026"
	I0717 18:54:41.286240  429608 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:54:41.286249  429608 fix.go:54] fixHost starting: 
	I0717 18:54:41.286528  429608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:54:41.286568  429608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:54:41.301500  429608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35507
	I0717 18:54:41.301971  429608 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:54:41.302475  429608 main.go:141] libmachine: Using API Version  1
	I0717 18:54:41.302498  429608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:54:41.302839  429608 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:54:41.303031  429608 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:54:41.303198  429608 main.go:141] libmachine: (multinode-717026) Calling .GetState
	I0717 18:54:41.304878  429608 fix.go:112] recreateIfNeeded on multinode-717026: state=Running err=<nil>
	W0717 18:54:41.304910  429608 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:54:41.310039  429608 out.go:177] * Updating the running kvm2 "multinode-717026" VM ...
	I0717 18:54:41.315325  429608 machine.go:94] provisionDockerMachine start ...
	I0717 18:54:41.315351  429608 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:54:41.315574  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:54:41.318214  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.318668  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:54:41.318694  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.318849  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:54:41.319000  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:41.319181  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:41.319294  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:54:41.319411  429608 main.go:141] libmachine: Using SSH client type: native
	I0717 18:54:41.319618  429608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0717 18:54:41.319643  429608 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:54:41.425745  429608 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-717026
	
	I0717 18:54:41.425778  429608 main.go:141] libmachine: (multinode-717026) Calling .GetMachineName
	I0717 18:54:41.426042  429608 buildroot.go:166] provisioning hostname "multinode-717026"
	I0717 18:54:41.426069  429608 main.go:141] libmachine: (multinode-717026) Calling .GetMachineName
	I0717 18:54:41.426257  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:54:41.429102  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.429526  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:54:41.429556  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.429753  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:54:41.429924  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:41.430050  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:41.430184  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:54:41.430340  429608 main.go:141] libmachine: Using SSH client type: native
	I0717 18:54:41.430581  429608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0717 18:54:41.430599  429608 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-717026 && echo "multinode-717026" | sudo tee /etc/hostname
	I0717 18:54:41.544322  429608 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-717026
	
	I0717 18:54:41.544355  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:54:41.547142  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.547504  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:54:41.547531  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.547710  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:54:41.547887  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:41.548066  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:41.548227  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:54:41.548397  429608 main.go:141] libmachine: Using SSH client type: native
	I0717 18:54:41.548633  429608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0717 18:54:41.548651  429608 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-717026' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-717026/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-717026' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:54:41.649650  429608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:54:41.649692  429608 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 18:54:41.649733  429608 buildroot.go:174] setting up certificates
	I0717 18:54:41.649745  429608 provision.go:84] configureAuth start
	I0717 18:54:41.649763  429608 main.go:141] libmachine: (multinode-717026) Calling .GetMachineName
	I0717 18:54:41.650040  429608 main.go:141] libmachine: (multinode-717026) Calling .GetIP
	I0717 18:54:41.652837  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.653295  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:54:41.653339  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.653481  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:54:41.655623  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.655936  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:54:41.655970  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.656105  429608 provision.go:143] copyHostCerts
	I0717 18:54:41.656143  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:54:41.656181  429608 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 18:54:41.656194  429608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:54:41.656264  429608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 18:54:41.656360  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:54:41.656391  429608 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 18:54:41.656402  429608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:54:41.656447  429608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 18:54:41.656562  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:54:41.656595  429608 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 18:54:41.656601  429608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:54:41.656640  429608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 18:54:41.656780  429608 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.multinode-717026 san=[127.0.0.1 192.168.39.122 localhost minikube multinode-717026]
	I0717 18:54:41.891833  429608 provision.go:177] copyRemoteCerts
	I0717 18:54:41.891899  429608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:54:41.891924  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:54:41.894869  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.895205  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:54:41.895229  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.895383  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:54:41.895596  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:41.895770  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:54:41.895930  429608 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/multinode-717026/id_rsa Username:docker}
	I0717 18:54:41.975186  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 18:54:41.975252  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 18:54:42.003523  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 18:54:42.003597  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:54:42.033013  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 18:54:42.033084  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 18:54:42.062545  429608 provision.go:87] duration metric: took 412.783979ms to configureAuth
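	[editor's note] configureAuth above regenerates the docker-machine style TLS material: host certs are copied, then a server certificate is signed by the local CA with the SANs listed on the provision.go:117 line (127.0.0.1, 192.168.39.122, localhost, minikube, multinode-717026). A minimal sketch of that signing step with crypto/x509 follows; it assumes an RSA CA key in PKCS#1 PEM form and uses shortened file names, and is illustrative only, not the provisioner's actual code.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	// mustDecode reads a PEM file and returns the first block's DER bytes.
	func mustDecode(path string) []byte {
		raw, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			log.Fatalf("no PEM data in %s", path)
		}
		return block.Bytes
	}

	func main() {
		// CA material; file names shortened from the .minikube/certs layout in the log.
		caCert, err := x509.ParseCertificate(mustDecode("ca.pem"))
		if err != nil {
			log.Fatal(err)
		}
		// Assumption: the CA key is an RSA key in PKCS#1 form.
		caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem"))
		if err != nil {
			log.Fatal(err)
		}
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-717026"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the provision.go:117 line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.122")},
			DNSNames:    []string{"localhost", "minikube", "multinode-717026"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}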
	I0717 18:54:42.062580  429608 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:54:42.062837  429608 config.go:182] Loaded profile config "multinode-717026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:54:42.062910  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:54:42.065562  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:42.065938  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:54:42.065965  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:42.066142  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:54:42.066340  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:42.066491  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:42.066628  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:54:42.066807  429608 main.go:141] libmachine: Using SSH client type: native
	I0717 18:54:42.066982  429608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0717 18:54:42.066998  429608 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:56:12.871907  429608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:56:12.871945  429608 machine.go:97] duration metric: took 1m31.556599945s to provisionDockerMachine
	I0717 18:56:12.871959  429608 start.go:293] postStartSetup for "multinode-717026" (driver="kvm2")
	I0717 18:56:12.871970  429608 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:56:12.871989  429608 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:56:12.872374  429608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:56:12.872408  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:56:12.875544  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:12.876063  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:56:12.876103  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:12.876291  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:56:12.876532  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:56:12.876743  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:56:12.877003  429608 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/multinode-717026/id_rsa Username:docker}
	I0717 18:56:12.961014  429608 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:56:12.965279  429608 command_runner.go:130] > NAME=Buildroot
	I0717 18:56:12.965302  429608 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0717 18:56:12.965306  429608 command_runner.go:130] > ID=buildroot
	I0717 18:56:12.965311  429608 command_runner.go:130] > VERSION_ID=2023.02.9
	I0717 18:56:12.965316  429608 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0717 18:56:12.965365  429608 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:56:12.965381  429608 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 18:56:12.965437  429608 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 18:56:12.965532  429608 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 18:56:12.965546  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /etc/ssl/certs/4001712.pem
	I0717 18:56:12.965643  429608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:56:12.974840  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:56:12.998105  429608 start.go:296] duration metric: took 126.131184ms for postStartSetup
	I0717 18:56:12.998157  429608 fix.go:56] duration metric: took 1m31.711907667s for fixHost
	I0717 18:56:12.998182  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:56:13.001302  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:13.001632  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:56:13.001660  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:13.001810  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:56:13.002007  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:56:13.002195  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:56:13.002306  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:56:13.002452  429608 main.go:141] libmachine: Using SSH client type: native
	I0717 18:56:13.002712  429608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0717 18:56:13.002727  429608 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:56:13.101507  429608 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721242573.083621926
	
	I0717 18:56:13.101530  429608 fix.go:216] guest clock: 1721242573.083621926
	I0717 18:56:13.101537  429608 fix.go:229] Guest: 2024-07-17 18:56:13.083621926 +0000 UTC Remote: 2024-07-17 18:56:12.998163309 +0000 UTC m=+91.839815646 (delta=85.458617ms)
	I0717 18:56:13.101559  429608 fix.go:200] guest clock delta is within tolerance: 85.458617ms
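	[editor's note] The fix.go lines above show the guest-clock validation: the VM runs `date +%s.%N` over SSH (the `%!s(MISSING)` verbs in the logged command appear to be a cosmetic log-formatting artifact; the returned value 1721242573.083621926 is the seconds.nanoseconds timestamp), and the delta against the host clock must stay within a tolerance. A minimal sketch of that check follows, using the timestamp from the log; the 2s tolerance is an assumed value for illustration only.

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1721242573.083621926") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Now().Sub(guest)
		const tolerance = 2 * time.Second // assumed threshold for illustration
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}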
	I0717 18:56:13.101564  429608 start.go:83] releasing machines lock for "multinode-717026", held for 1m31.815332434s
	I0717 18:56:13.101586  429608 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:56:13.101942  429608 main.go:141] libmachine: (multinode-717026) Calling .GetIP
	I0717 18:56:13.104859  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:13.105242  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:56:13.105270  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:13.105429  429608 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:56:13.106113  429608 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:56:13.106329  429608 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:56:13.106406  429608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:56:13.106455  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:56:13.106578  429608 ssh_runner.go:195] Run: cat /version.json
	I0717 18:56:13.106619  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:56:13.109217  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:13.109469  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:13.109659  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:56:13.109685  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:13.109826  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:56:13.109848  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:56:13.109895  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:13.110014  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:56:13.110015  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:56:13.110218  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:56:13.110235  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:56:13.110362  429608 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/multinode-717026/id_rsa Username:docker}
	I0717 18:56:13.110385  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:56:13.110508  429608 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/multinode-717026/id_rsa Username:docker}
	I0717 18:56:13.202599  429608 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 18:56:13.202712  429608 command_runner.go:130] > {"iso_version": "v1.33.1-1721146474-19264", "kicbase_version": "v0.0.44-1721064868-19249", "minikube_version": "v1.33.1", "commit": "6e0d7ef26437c947028f356d4449a323918e966e"}
	I0717 18:56:13.202914  429608 ssh_runner.go:195] Run: systemctl --version
	I0717 18:56:13.208934  429608 command_runner.go:130] > systemd 252 (252)
	I0717 18:56:13.208967  429608 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0717 18:56:13.209411  429608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:56:13.370350  429608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 18:56:13.379398  429608 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 18:56:13.379697  429608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:56:13.379797  429608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:56:13.389328  429608 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 18:56:13.389364  429608 start.go:495] detecting cgroup driver to use...
	I0717 18:56:13.389449  429608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:56:13.405048  429608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:56:13.418396  429608 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:56:13.418465  429608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:56:13.431310  429608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:56:13.444333  429608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:56:13.583891  429608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:56:13.728585  429608 docker.go:233] disabling docker service ...
	I0717 18:56:13.728667  429608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:56:13.745838  429608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:56:13.759508  429608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:56:13.900537  429608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:56:14.039760  429608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:56:14.054034  429608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:56:14.073349  429608 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 18:56:14.073745  429608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:56:14.073817  429608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:56:14.085191  429608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:56:14.085272  429608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:56:14.096192  429608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:56:14.106886  429608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:56:14.124287  429608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:56:14.149220  429608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:56:14.159829  429608 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:56:14.170826  429608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:56:14.181504  429608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:56:14.190762  429608 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 18:56:14.190825  429608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:56:14.199848  429608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:56:14.335905  429608 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:56:21.519053  429608 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.183101936s)
	I0717 18:56:21.519100  429608 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:56:21.519157  429608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:56:21.524116  429608 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 18:56:21.524141  429608 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 18:56:21.524147  429608 command_runner.go:130] > Device: 0,22	Inode: 1327        Links: 1
	I0717 18:56:21.524154  429608 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 18:56:21.524161  429608 command_runner.go:130] > Access: 2024-07-17 18:56:21.397120090 +0000
	I0717 18:56:21.524169  429608 command_runner.go:130] > Modify: 2024-07-17 18:56:21.397120090 +0000
	I0717 18:56:21.524184  429608 command_runner.go:130] > Change: 2024-07-17 18:56:21.397120090 +0000
	I0717 18:56:21.524195  429608 command_runner.go:130] >  Birth: -
	I0717 18:56:21.524216  429608 start.go:563] Will wait 60s for crictl version
	I0717 18:56:21.524264  429608 ssh_runner.go:195] Run: which crictl
	I0717 18:56:21.528060  429608 command_runner.go:130] > /usr/bin/crictl
	I0717 18:56:21.528117  429608 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:56:21.569839  429608 command_runner.go:130] > Version:  0.1.0
	I0717 18:56:21.569871  429608 command_runner.go:130] > RuntimeName:  cri-o
	I0717 18:56:21.569908  429608 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0717 18:56:21.570089  429608 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 18:56:21.572593  429608 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:56:21.572679  429608 ssh_runner.go:195] Run: crio --version
	I0717 18:56:21.601042  429608 command_runner.go:130] > crio version 1.29.1
	I0717 18:56:21.601062  429608 command_runner.go:130] > Version:        1.29.1
	I0717 18:56:21.601068  429608 command_runner.go:130] > GitCommit:      unknown
	I0717 18:56:21.601072  429608 command_runner.go:130] > GitCommitDate:  unknown
	I0717 18:56:21.601076  429608 command_runner.go:130] > GitTreeState:   clean
	I0717 18:56:21.601085  429608 command_runner.go:130] > BuildDate:      2024-07-16T21:25:55Z
	I0717 18:56:21.601091  429608 command_runner.go:130] > GoVersion:      go1.21.6
	I0717 18:56:21.601094  429608 command_runner.go:130] > Compiler:       gc
	I0717 18:56:21.601099  429608 command_runner.go:130] > Platform:       linux/amd64
	I0717 18:56:21.601103  429608 command_runner.go:130] > Linkmode:       dynamic
	I0717 18:56:21.601108  429608 command_runner.go:130] > BuildTags:      
	I0717 18:56:21.601112  429608 command_runner.go:130] >   containers_image_ostree_stub
	I0717 18:56:21.601116  429608 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0717 18:56:21.601123  429608 command_runner.go:130] >   btrfs_noversion
	I0717 18:56:21.601127  429608 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0717 18:56:21.601134  429608 command_runner.go:130] >   libdm_no_deferred_remove
	I0717 18:56:21.601138  429608 command_runner.go:130] >   seccomp
	I0717 18:56:21.601142  429608 command_runner.go:130] > LDFlags:          unknown
	I0717 18:56:21.601146  429608 command_runner.go:130] > SeccompEnabled:   true
	I0717 18:56:21.601150  429608 command_runner.go:130] > AppArmorEnabled:  false
	I0717 18:56:21.601220  429608 ssh_runner.go:195] Run: crio --version
	I0717 18:56:21.629483  429608 command_runner.go:130] > crio version 1.29.1
	I0717 18:56:21.629505  429608 command_runner.go:130] > Version:        1.29.1
	I0717 18:56:21.629525  429608 command_runner.go:130] > GitCommit:      unknown
	I0717 18:56:21.629529  429608 command_runner.go:130] > GitCommitDate:  unknown
	I0717 18:56:21.629533  429608 command_runner.go:130] > GitTreeState:   clean
	I0717 18:56:21.629538  429608 command_runner.go:130] > BuildDate:      2024-07-16T21:25:55Z
	I0717 18:56:21.629542  429608 command_runner.go:130] > GoVersion:      go1.21.6
	I0717 18:56:21.629546  429608 command_runner.go:130] > Compiler:       gc
	I0717 18:56:21.629551  429608 command_runner.go:130] > Platform:       linux/amd64
	I0717 18:56:21.629556  429608 command_runner.go:130] > Linkmode:       dynamic
	I0717 18:56:21.629564  429608 command_runner.go:130] > BuildTags:      
	I0717 18:56:21.629568  429608 command_runner.go:130] >   containers_image_ostree_stub
	I0717 18:56:21.629577  429608 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0717 18:56:21.629584  429608 command_runner.go:130] >   btrfs_noversion
	I0717 18:56:21.629602  429608 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0717 18:56:21.629609  429608 command_runner.go:130] >   libdm_no_deferred_remove
	I0717 18:56:21.629613  429608 command_runner.go:130] >   seccomp
	I0717 18:56:21.629617  429608 command_runner.go:130] > LDFlags:          unknown
	I0717 18:56:21.629620  429608 command_runner.go:130] > SeccompEnabled:   true
	I0717 18:56:21.629624  429608 command_runner.go:130] > AppArmorEnabled:  false
	I0717 18:56:21.632607  429608 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:56:21.633941  429608 main.go:141] libmachine: (multinode-717026) Calling .GetIP
	I0717 18:56:21.636693  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:21.637047  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:56:21.637070  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:21.637262  429608 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:56:21.641569  429608 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0717 18:56:21.641663  429608 kubeadm.go:883] updating cluster {Name:multinode-717026 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.2 ClusterName:multinode-717026 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:56:21.641822  429608 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:56:21.641860  429608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:56:21.681076  429608 command_runner.go:130] > {
	I0717 18:56:21.681099  429608 command_runner.go:130] >   "images": [
	I0717 18:56:21.681103  429608 command_runner.go:130] >     {
	I0717 18:56:21.681111  429608 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0717 18:56:21.681115  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681121  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0717 18:56:21.681125  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681129  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681137  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0717 18:56:21.681144  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0717 18:56:21.681148  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681153  429608 command_runner.go:130] >       "size": "65908273",
	I0717 18:56:21.681165  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.681176  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681187  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681193  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681197  429608 command_runner.go:130] >     },
	I0717 18:56:21.681201  429608 command_runner.go:130] >     {
	I0717 18:56:21.681207  429608 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0717 18:56:21.681213  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681218  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0717 18:56:21.681224  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681228  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681235  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0717 18:56:21.681247  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0717 18:56:21.681254  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681258  429608 command_runner.go:130] >       "size": "87165492",
	I0717 18:56:21.681261  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.681269  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681275  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681279  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681282  429608 command_runner.go:130] >     },
	I0717 18:56:21.681285  429608 command_runner.go:130] >     {
	I0717 18:56:21.681290  429608 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0717 18:56:21.681296  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681302  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0717 18:56:21.681307  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681311  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681319  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0717 18:56:21.681327  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0717 18:56:21.681333  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681338  429608 command_runner.go:130] >       "size": "1363676",
	I0717 18:56:21.681341  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.681345  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681349  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681353  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681356  429608 command_runner.go:130] >     },
	I0717 18:56:21.681359  429608 command_runner.go:130] >     {
	I0717 18:56:21.681370  429608 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 18:56:21.681377  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681382  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 18:56:21.681388  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681392  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681399  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 18:56:21.681416  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 18:56:21.681422  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681426  429608 command_runner.go:130] >       "size": "31470524",
	I0717 18:56:21.681432  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.681436  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681440  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681444  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681447  429608 command_runner.go:130] >     },
	I0717 18:56:21.681451  429608 command_runner.go:130] >     {
	I0717 18:56:21.681458  429608 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0717 18:56:21.681462  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681467  429608 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0717 18:56:21.681473  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681476  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681483  429608 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0717 18:56:21.681493  429608 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0717 18:56:21.681496  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681500  429608 command_runner.go:130] >       "size": "61245718",
	I0717 18:56:21.681504  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.681509  429608 command_runner.go:130] >       "username": "nonroot",
	I0717 18:56:21.681513  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681520  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681524  429608 command_runner.go:130] >     },
	I0717 18:56:21.681527  429608 command_runner.go:130] >     {
	I0717 18:56:21.681533  429608 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0717 18:56:21.681537  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681542  429608 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0717 18:56:21.681546  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681550  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681557  429608 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0717 18:56:21.681570  429608 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0717 18:56:21.681573  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681577  429608 command_runner.go:130] >       "size": "150779692",
	I0717 18:56:21.681583  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.681596  429608 command_runner.go:130] >         "value": "0"
	I0717 18:56:21.681601  429608 command_runner.go:130] >       },
	I0717 18:56:21.681604  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681608  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681612  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681615  429608 command_runner.go:130] >     },
	I0717 18:56:21.681619  429608 command_runner.go:130] >     {
	I0717 18:56:21.681624  429608 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0717 18:56:21.681630  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681635  429608 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0717 18:56:21.681641  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681645  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681651  429608 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0717 18:56:21.681660  429608 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0717 18:56:21.681664  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681668  429608 command_runner.go:130] >       "size": "117609954",
	I0717 18:56:21.681671  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.681675  429608 command_runner.go:130] >         "value": "0"
	I0717 18:56:21.681679  429608 command_runner.go:130] >       },
	I0717 18:56:21.681684  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681688  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681694  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681697  429608 command_runner.go:130] >     },
	I0717 18:56:21.681700  429608 command_runner.go:130] >     {
	I0717 18:56:21.681706  429608 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0717 18:56:21.681712  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681717  429608 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0717 18:56:21.681722  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681726  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681750  429608 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0717 18:56:21.681761  429608 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0717 18:56:21.681764  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681772  429608 command_runner.go:130] >       "size": "112194888",
	I0717 18:56:21.681778  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.681782  429608 command_runner.go:130] >         "value": "0"
	I0717 18:56:21.681785  429608 command_runner.go:130] >       },
	I0717 18:56:21.681789  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681792  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681796  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681799  429608 command_runner.go:130] >     },
	I0717 18:56:21.681802  429608 command_runner.go:130] >     {
	I0717 18:56:21.681807  429608 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0717 18:56:21.681811  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681815  429608 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0717 18:56:21.681818  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681821  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681830  429608 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0717 18:56:21.681837  429608 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0717 18:56:21.681840  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681844  429608 command_runner.go:130] >       "size": "85953433",
	I0717 18:56:21.681847  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.681851  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681854  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681858  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681861  429608 command_runner.go:130] >     },
	I0717 18:56:21.681864  429608 command_runner.go:130] >     {
	I0717 18:56:21.681870  429608 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0717 18:56:21.681873  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681877  429608 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0717 18:56:21.681880  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681884  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681891  429608 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0717 18:56:21.681897  429608 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0717 18:56:21.681901  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681905  429608 command_runner.go:130] >       "size": "63051080",
	I0717 18:56:21.681908  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.681912  429608 command_runner.go:130] >         "value": "0"
	I0717 18:56:21.681915  429608 command_runner.go:130] >       },
	I0717 18:56:21.681925  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681931  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681934  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681937  429608 command_runner.go:130] >     },
	I0717 18:56:21.681941  429608 command_runner.go:130] >     {
	I0717 18:56:21.681947  429608 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 18:56:21.681951  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681958  429608 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 18:56:21.681961  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681967  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681973  429608 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 18:56:21.681982  429608 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 18:56:21.681985  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681989  429608 command_runner.go:130] >       "size": "750414",
	I0717 18:56:21.681994  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.681998  429608 command_runner.go:130] >         "value": "65535"
	I0717 18:56:21.682003  429608 command_runner.go:130] >       },
	I0717 18:56:21.682007  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.682010  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.682014  429608 command_runner.go:130] >       "pinned": true
	I0717 18:56:21.682018  429608 command_runner.go:130] >     }
	I0717 18:56:21.682021  429608 command_runner.go:130] >   ]
	I0717 18:56:21.682024  429608 command_runner.go:130] > }
	I0717 18:56:21.682676  429608 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:56:21.682697  429608 crio.go:433] Images already preloaded, skipping extraction
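	[editor's note] The preload check above lists images with `sudo crictl images --output json` and compares the repoTags against the expected image set for the selected Kubernetes version before deciding to skip extraction. The sketch below shows the shape of that comparison with a hand-picked subset of the tags visible in the listing; it is illustrative only, not minikube's crio.go implementation.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image mirrors the fields of interest in `crictl images --output json`.
	type image struct {
		RepoTags []string `json:"repoTags"`
	}

	type imageList struct {
		Images []image `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Subset of the tags from the listing above; a real check would cover
		// the full image list for the selected Kubernetes version.
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.30.2",
			"registry.k8s.io/kube-proxy:v1.30.2",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/pause:3.9",
		}
		missing := 0
		for _, tag := range want {
			if !have[tag] {
				fmt.Println("missing:", tag)
				missing++
			}
		}
		if missing == 0 {
			fmt.Println("all images are preloaded for cri-o runtime")
		}
	}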
	I0717 18:56:21.682758  429608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:56:21.713554  429608 command_runner.go:130] > {
	I0717 18:56:21.713579  429608 command_runner.go:130] >   "images": [
	I0717 18:56:21.713586  429608 command_runner.go:130] >     {
	I0717 18:56:21.713597  429608 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0717 18:56:21.713602  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.713611  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0717 18:56:21.713616  429608 command_runner.go:130] >       ],
	I0717 18:56:21.713622  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.713642  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0717 18:56:21.713657  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0717 18:56:21.713666  429608 command_runner.go:130] >       ],
	I0717 18:56:21.713674  429608 command_runner.go:130] >       "size": "65908273",
	I0717 18:56:21.713683  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.713701  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.713713  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.713720  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.713729  429608 command_runner.go:130] >     },
	I0717 18:56:21.713734  429608 command_runner.go:130] >     {
	I0717 18:56:21.713744  429608 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0717 18:56:21.713753  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.713770  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0717 18:56:21.713780  429608 command_runner.go:130] >       ],
	I0717 18:56:21.713789  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.713802  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0717 18:56:21.713817  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0717 18:56:21.713826  429608 command_runner.go:130] >       ],
	I0717 18:56:21.713833  429608 command_runner.go:130] >       "size": "87165492",
	I0717 18:56:21.713842  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.713856  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.713866  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.713873  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.713881  429608 command_runner.go:130] >     },
	I0717 18:56:21.713887  429608 command_runner.go:130] >     {
	I0717 18:56:21.713897  429608 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0717 18:56:21.713906  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.713916  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0717 18:56:21.713925  429608 command_runner.go:130] >       ],
	I0717 18:56:21.713932  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.713947  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0717 18:56:21.713962  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0717 18:56:21.713971  429608 command_runner.go:130] >       ],
	I0717 18:56:21.713978  429608 command_runner.go:130] >       "size": "1363676",
	I0717 18:56:21.713987  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.713996  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.714006  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.714015  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.714021  429608 command_runner.go:130] >     },
	I0717 18:56:21.714030  429608 command_runner.go:130] >     {
	I0717 18:56:21.714042  429608 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 18:56:21.714057  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.714069  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 18:56:21.714077  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714084  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.714100  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 18:56:21.714124  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 18:56:21.714133  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714141  429608 command_runner.go:130] >       "size": "31470524",
	I0717 18:56:21.714158  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.714168  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.714178  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.714185  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.714193  429608 command_runner.go:130] >     },
	I0717 18:56:21.714200  429608 command_runner.go:130] >     {
	I0717 18:56:21.714213  429608 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0717 18:56:21.714222  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.714231  429608 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0717 18:56:21.714238  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714248  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.714262  429608 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0717 18:56:21.714277  429608 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0717 18:56:21.714285  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714293  429608 command_runner.go:130] >       "size": "61245718",
	I0717 18:56:21.714302  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.714311  429608 command_runner.go:130] >       "username": "nonroot",
	I0717 18:56:21.714319  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.714328  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.714334  429608 command_runner.go:130] >     },
	I0717 18:56:21.714342  429608 command_runner.go:130] >     {
	I0717 18:56:21.714353  429608 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0717 18:56:21.714362  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.714372  429608 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0717 18:56:21.714379  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714387  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.714409  429608 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0717 18:56:21.714423  429608 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0717 18:56:21.714437  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714448  429608 command_runner.go:130] >       "size": "150779692",
	I0717 18:56:21.714456  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.714464  429608 command_runner.go:130] >         "value": "0"
	I0717 18:56:21.714476  429608 command_runner.go:130] >       },
	I0717 18:56:21.714485  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.714493  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.714501  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.714507  429608 command_runner.go:130] >     },
	I0717 18:56:21.714513  429608 command_runner.go:130] >     {
	I0717 18:56:21.714526  429608 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0717 18:56:21.714535  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.714546  429608 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0717 18:56:21.714554  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714561  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.714576  429608 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0717 18:56:21.714591  429608 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0717 18:56:21.714600  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714607  429608 command_runner.go:130] >       "size": "117609954",
	I0717 18:56:21.714616  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.714623  429608 command_runner.go:130] >         "value": "0"
	I0717 18:56:21.714630  429608 command_runner.go:130] >       },
	I0717 18:56:21.714643  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.714653  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.714662  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.714670  429608 command_runner.go:130] >     },
	I0717 18:56:21.714676  429608 command_runner.go:130] >     {
	I0717 18:56:21.714687  429608 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0717 18:56:21.714696  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.714707  429608 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0717 18:56:21.714715  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714722  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.714757  429608 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0717 18:56:21.714802  429608 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0717 18:56:21.714817  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714824  429608 command_runner.go:130] >       "size": "112194888",
	I0717 18:56:21.714841  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.714851  429608 command_runner.go:130] >         "value": "0"
	I0717 18:56:21.714859  429608 command_runner.go:130] >       },
	I0717 18:56:21.714867  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.714876  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.714884  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.714891  429608 command_runner.go:130] >     },
	I0717 18:56:21.714897  429608 command_runner.go:130] >     {
	I0717 18:56:21.714908  429608 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0717 18:56:21.714916  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.714925  429608 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0717 18:56:21.714933  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714940  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.714956  429608 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0717 18:56:21.714974  429608 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0717 18:56:21.714982  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714990  429608 command_runner.go:130] >       "size": "85953433",
	I0717 18:56:21.714998  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.715005  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.715014  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.715021  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.715029  429608 command_runner.go:130] >     },
	I0717 18:56:21.715035  429608 command_runner.go:130] >     {
	I0717 18:56:21.715046  429608 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0717 18:56:21.715055  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.715064  429608 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0717 18:56:21.715072  429608 command_runner.go:130] >       ],
	I0717 18:56:21.715080  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.715095  429608 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0717 18:56:21.715111  429608 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0717 18:56:21.715119  429608 command_runner.go:130] >       ],
	I0717 18:56:21.715127  429608 command_runner.go:130] >       "size": "63051080",
	I0717 18:56:21.715136  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.715142  429608 command_runner.go:130] >         "value": "0"
	I0717 18:56:21.715150  429608 command_runner.go:130] >       },
	I0717 18:56:21.715157  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.715174  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.715184  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.715192  429608 command_runner.go:130] >     },
	I0717 18:56:21.715198  429608 command_runner.go:130] >     {
	I0717 18:56:21.715209  429608 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 18:56:21.715218  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.715227  429608 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 18:56:21.715235  429608 command_runner.go:130] >       ],
	I0717 18:56:21.715241  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.715255  429608 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 18:56:21.715269  429608 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 18:56:21.715278  429608 command_runner.go:130] >       ],
	I0717 18:56:21.715285  429608 command_runner.go:130] >       "size": "750414",
	I0717 18:56:21.715294  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.715301  429608 command_runner.go:130] >         "value": "65535"
	I0717 18:56:21.715308  429608 command_runner.go:130] >       },
	I0717 18:56:21.715315  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.715323  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.715330  429608 command_runner.go:130] >       "pinned": true
	I0717 18:56:21.715338  429608 command_runner.go:130] >     }
	I0717 18:56:21.715344  429608 command_runner.go:130] >   ]
	I0717 18:56:21.715348  429608 command_runner.go:130] > }
	I0717 18:56:21.715507  429608 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:56:21.715522  429608 cache_images.go:84] Images are preloaded, skipping loading
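The JSON dumped above is the raw output of "sudo crictl images --output json", which minikube decodes to decide whether the preloaded tarball still needs to be extracted. The following is a minimal sketch of that kind of check, assuming illustrative struct and file names (it is not minikube's actual crio.go code); the JSON field names mirror the keys visible in the log.

// imagecheck.go - minimal sketch: run the same "crictl images --output json"
// call seen in the log above and list each image with its pinned flag.
// Struct and file names here are illustrative assumptions.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Printf("%v pinned=%v size=%s\n", img.RepoTags, img.Pinned, img.Size)
	}
}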
	I0717 18:56:21.715532  429608 kubeadm.go:934] updating node { 192.168.39.122 8443 v1.30.2 crio true true} ...
	I0717 18:56:21.715688  429608 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-717026 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-717026 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
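The block logged above is the kubelet systemd drop-in that minikube renders for this node: an ExecStart override pointing at the versioned kubelet binary plus node-specific flags (hostname override and node IP). A minimal sketch of rendering such a drop-in with text/template is shown below; the template text and field names are assumptions for illustration, not minikube's actual kubeadm.go implementation.

// kubeletunit.go - illustrative only: renders a kubelet drop-in similar to the
// one logged above. Template text and struct fields are assumptions.
package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.2", "multinode-717026", "192.168.39.122"})
}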
	I0717 18:56:21.715777  429608 ssh_runner.go:195] Run: crio config
	I0717 18:56:21.748283  429608 command_runner.go:130] ! time="2024-07-17 18:56:21.730146618Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0717 18:56:21.754255  429608 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0717 18:56:21.768352  429608 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 18:56:21.768386  429608 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 18:56:21.768396  429608 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 18:56:21.768401  429608 command_runner.go:130] > #
	I0717 18:56:21.768411  429608 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 18:56:21.768421  429608 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 18:56:21.768431  429608 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 18:56:21.768445  429608 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 18:56:21.768454  429608 command_runner.go:130] > # reload'.
	I0717 18:56:21.768463  429608 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 18:56:21.768473  429608 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 18:56:21.768500  429608 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 18:56:21.768513  429608 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 18:56:21.768521  429608 command_runner.go:130] > [crio]
	I0717 18:56:21.768538  429608 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 18:56:21.768549  429608 command_runner.go:130] > # containers images, in this directory.
	I0717 18:56:21.768557  429608 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 18:56:21.768574  429608 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 18:56:21.768584  429608 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 18:56:21.768596  429608 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0717 18:56:21.768602  429608 command_runner.go:130] > # imagestore = ""
	I0717 18:56:21.768608  429608 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 18:56:21.768616  429608 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 18:56:21.768622  429608 command_runner.go:130] > storage_driver = "overlay"
	I0717 18:56:21.768628  429608 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 18:56:21.768636  429608 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 18:56:21.768651  429608 command_runner.go:130] > storage_option = [
	I0717 18:56:21.768658  429608 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 18:56:21.768661  429608 command_runner.go:130] > ]
	I0717 18:56:21.768670  429608 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 18:56:21.768676  429608 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 18:56:21.768683  429608 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 18:56:21.768689  429608 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 18:56:21.768697  429608 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 18:56:21.768701  429608 command_runner.go:130] > # always happen on a node reboot
	I0717 18:56:21.768707  429608 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 18:56:21.768718  429608 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 18:56:21.768726  429608 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 18:56:21.768732  429608 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 18:56:21.768738  429608 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0717 18:56:21.768746  429608 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 18:56:21.768756  429608 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 18:56:21.768762  429608 command_runner.go:130] > # internal_wipe = true
	I0717 18:56:21.768769  429608 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0717 18:56:21.768776  429608 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0717 18:56:21.768780  429608 command_runner.go:130] > # internal_repair = false
	I0717 18:56:21.768788  429608 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 18:56:21.768794  429608 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 18:56:21.768801  429608 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 18:56:21.768806  429608 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 18:56:21.768814  429608 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 18:56:21.768817  429608 command_runner.go:130] > [crio.api]
	I0717 18:56:21.768822  429608 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 18:56:21.768826  429608 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 18:56:21.768833  429608 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 18:56:21.768838  429608 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 18:56:21.768846  429608 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 18:56:21.768851  429608 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 18:56:21.768857  429608 command_runner.go:130] > # stream_port = "0"
	I0717 18:56:21.768862  429608 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 18:56:21.768868  429608 command_runner.go:130] > # stream_enable_tls = false
	I0717 18:56:21.768874  429608 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 18:56:21.768889  429608 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 18:56:21.768900  429608 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 18:56:21.768908  429608 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 18:56:21.768913  429608 command_runner.go:130] > # minutes.
	I0717 18:56:21.768917  429608 command_runner.go:130] > # stream_tls_cert = ""
	I0717 18:56:21.768924  429608 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 18:56:21.768932  429608 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 18:56:21.768938  429608 command_runner.go:130] > # stream_tls_key = ""
	I0717 18:56:21.768944  429608 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 18:56:21.768951  429608 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 18:56:21.768977  429608 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 18:56:21.768983  429608 command_runner.go:130] > # stream_tls_ca = ""
	I0717 18:56:21.768991  429608 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0717 18:56:21.768997  429608 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 18:56:21.769004  429608 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0717 18:56:21.769010  429608 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 18:56:21.769016  429608 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 18:56:21.769023  429608 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 18:56:21.769027  429608 command_runner.go:130] > [crio.runtime]
	I0717 18:56:21.769034  429608 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 18:56:21.769039  429608 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 18:56:21.769045  429608 command_runner.go:130] > # "nofile=1024:2048"
	I0717 18:56:21.769051  429608 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 18:56:21.769057  429608 command_runner.go:130] > # default_ulimits = [
	I0717 18:56:21.769060  429608 command_runner.go:130] > # ]
	I0717 18:56:21.769066  429608 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 18:56:21.769072  429608 command_runner.go:130] > # no_pivot = false
	I0717 18:56:21.769078  429608 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 18:56:21.769086  429608 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 18:56:21.769092  429608 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 18:56:21.769098  429608 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 18:56:21.769104  429608 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 18:56:21.769110  429608 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 18:56:21.769117  429608 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 18:56:21.769121  429608 command_runner.go:130] > # Cgroup setting for conmon
	I0717 18:56:21.769129  429608 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 18:56:21.769138  429608 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 18:56:21.769146  429608 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 18:56:21.769153  429608 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 18:56:21.769161  429608 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 18:56:21.769167  429608 command_runner.go:130] > conmon_env = [
	I0717 18:56:21.769172  429608 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 18:56:21.769177  429608 command_runner.go:130] > ]
	I0717 18:56:21.769183  429608 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 18:56:21.769189  429608 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 18:56:21.769194  429608 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 18:56:21.769201  429608 command_runner.go:130] > # default_env = [
	I0717 18:56:21.769206  429608 command_runner.go:130] > # ]
	I0717 18:56:21.769218  429608 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 18:56:21.769232  429608 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0717 18:56:21.769240  429608 command_runner.go:130] > # selinux = false
	I0717 18:56:21.769258  429608 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 18:56:21.769270  429608 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 18:56:21.769282  429608 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 18:56:21.769290  429608 command_runner.go:130] > # seccomp_profile = ""
	I0717 18:56:21.769299  429608 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 18:56:21.769308  429608 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 18:56:21.769316  429608 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 18:56:21.769321  429608 command_runner.go:130] > # which might increase security.
	I0717 18:56:21.769325  429608 command_runner.go:130] > # This option is currently deprecated,
	I0717 18:56:21.769333  429608 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0717 18:56:21.769340  429608 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 18:56:21.769346  429608 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 18:56:21.769353  429608 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 18:56:21.769361  429608 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 18:56:21.769367  429608 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 18:56:21.769373  429608 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:56:21.769378  429608 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 18:56:21.769385  429608 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 18:56:21.769389  429608 command_runner.go:130] > # the cgroup blockio controller.
	I0717 18:56:21.769395  429608 command_runner.go:130] > # blockio_config_file = ""
	I0717 18:56:21.769401  429608 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0717 18:56:21.769546  429608 command_runner.go:130] > # blockio parameters.
	I0717 18:56:21.769677  429608 command_runner.go:130] > # blockio_reload = false
	I0717 18:56:21.769694  429608 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 18:56:21.769700  429608 command_runner.go:130] > # irqbalance daemon.
	I0717 18:56:21.770920  429608 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 18:56:21.771221  429608 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0717 18:56:21.771244  429608 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0717 18:56:21.771257  429608 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0717 18:56:21.771267  429608 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0717 18:56:21.771279  429608 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 18:56:21.771291  429608 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:56:21.771298  429608 command_runner.go:130] > # rdt_config_file = ""
	I0717 18:56:21.771310  429608 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 18:56:21.771320  429608 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 18:56:21.771349  429608 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 18:56:21.771359  429608 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 18:56:21.771368  429608 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 18:56:21.771379  429608 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 18:56:21.771385  429608 command_runner.go:130] > # will be added.
	I0717 18:56:21.771393  429608 command_runner.go:130] > # default_capabilities = [
	I0717 18:56:21.771399  429608 command_runner.go:130] > # 	"CHOWN",
	I0717 18:56:21.771405  429608 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 18:56:21.771412  429608 command_runner.go:130] > # 	"FSETID",
	I0717 18:56:21.771418  429608 command_runner.go:130] > # 	"FOWNER",
	I0717 18:56:21.771424  429608 command_runner.go:130] > # 	"SETGID",
	I0717 18:56:21.771431  429608 command_runner.go:130] > # 	"SETUID",
	I0717 18:56:21.771437  429608 command_runner.go:130] > # 	"SETPCAP",
	I0717 18:56:21.771444  429608 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 18:56:21.771450  429608 command_runner.go:130] > # 	"KILL",
	I0717 18:56:21.771455  429608 command_runner.go:130] > # ]
	I0717 18:56:21.771468  429608 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0717 18:56:21.771482  429608 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0717 18:56:21.771492  429608 command_runner.go:130] > # add_inheritable_capabilities = false
	I0717 18:56:21.771504  429608 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 18:56:21.771516  429608 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 18:56:21.771524  429608 command_runner.go:130] > default_sysctls = [
	I0717 18:56:21.771534  429608 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0717 18:56:21.771541  429608 command_runner.go:130] > ]
	I0717 18:56:21.771550  429608 command_runner.go:130] > # List of devices on the host that a
	I0717 18:56:21.771563  429608 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 18:56:21.771572  429608 command_runner.go:130] > # allowed_devices = [
	I0717 18:56:21.771577  429608 command_runner.go:130] > # 	"/dev/fuse",
	I0717 18:56:21.771581  429608 command_runner.go:130] > # ]
	I0717 18:56:21.771588  429608 command_runner.go:130] > # List of additional devices. specified as
	I0717 18:56:21.771604  429608 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 18:56:21.771616  429608 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 18:56:21.771638  429608 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 18:56:21.771648  429608 command_runner.go:130] > # additional_devices = [
	I0717 18:56:21.771656  429608 command_runner.go:130] > # ]
	I0717 18:56:21.771666  429608 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 18:56:21.771675  429608 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 18:56:21.771683  429608 command_runner.go:130] > # 	"/etc/cdi",
	I0717 18:56:21.771692  429608 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 18:56:21.771697  429608 command_runner.go:130] > # ]
	I0717 18:56:21.771709  429608 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 18:56:21.771723  429608 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 18:56:21.771731  429608 command_runner.go:130] > # Defaults to false.
	I0717 18:56:21.771740  429608 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 18:56:21.771753  429608 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 18:56:21.771766  429608 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 18:56:21.771775  429608 command_runner.go:130] > # hooks_dir = [
	I0717 18:56:21.771785  429608 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 18:56:21.771792  429608 command_runner.go:130] > # ]
	I0717 18:56:21.771808  429608 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 18:56:21.771838  429608 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 18:56:21.771855  429608 command_runner.go:130] > # its default mounts from the following two files:
	I0717 18:56:21.771863  429608 command_runner.go:130] > #
	I0717 18:56:21.771874  429608 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 18:56:21.771888  429608 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 18:56:21.771900  429608 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 18:56:21.771909  429608 command_runner.go:130] > #
	I0717 18:56:21.771920  429608 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 18:56:21.771933  429608 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 18:56:21.771946  429608 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 18:56:21.771956  429608 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 18:56:21.771962  429608 command_runner.go:130] > #
	I0717 18:56:21.771972  429608 command_runner.go:130] > # default_mounts_file = ""
	I0717 18:56:21.771982  429608 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 18:56:21.771995  429608 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 18:56:21.772005  429608 command_runner.go:130] > pids_limit = 1024
	I0717 18:56:21.772019  429608 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0717 18:56:21.772032  429608 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 18:56:21.772045  429608 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 18:56:21.772061  429608 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 18:56:21.772071  429608 command_runner.go:130] > # log_size_max = -1
	I0717 18:56:21.772086  429608 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 18:56:21.772101  429608 command_runner.go:130] > # log_to_journald = false
	I0717 18:56:21.772114  429608 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 18:56:21.772136  429608 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 18:56:21.772145  429608 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 18:56:21.772156  429608 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 18:56:21.772168  429608 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 18:56:21.772178  429608 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 18:56:21.772191  429608 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 18:56:21.772199  429608 command_runner.go:130] > # read_only = false
	I0717 18:56:21.772209  429608 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 18:56:21.772222  429608 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 18:56:21.772232  429608 command_runner.go:130] > # live configuration reload.
	I0717 18:56:21.772239  429608 command_runner.go:130] > # log_level = "info"
	I0717 18:56:21.772251  429608 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 18:56:21.772263  429608 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:56:21.772272  429608 command_runner.go:130] > # log_filter = ""
	I0717 18:56:21.772283  429608 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 18:56:21.772298  429608 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 18:56:21.772307  429608 command_runner.go:130] > # separated by comma.
	I0717 18:56:21.772322  429608 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 18:56:21.772331  429608 command_runner.go:130] > # uid_mappings = ""
	I0717 18:56:21.772341  429608 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 18:56:21.772354  429608 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 18:56:21.772364  429608 command_runner.go:130] > # separated by comma.
	I0717 18:56:21.772380  429608 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 18:56:21.772389  429608 command_runner.go:130] > # gid_mappings = ""
	I0717 18:56:21.772402  429608 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 18:56:21.772414  429608 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 18:56:21.772427  429608 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 18:56:21.772440  429608 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 18:56:21.772451  429608 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 18:56:21.772464  429608 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 18:56:21.772477  429608 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 18:56:21.772503  429608 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 18:56:21.772517  429608 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 18:56:21.772528  429608 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 18:56:21.772638  429608 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 18:56:21.772657  429608 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 18:56:21.772668  429608 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 18:56:21.772680  429608 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 18:56:21.772691  429608 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 18:56:21.772704  429608 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 18:56:21.772714  429608 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 18:56:21.772723  429608 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 18:56:21.772734  429608 command_runner.go:130] > drop_infra_ctr = false
	I0717 18:56:21.772747  429608 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 18:56:21.772761  429608 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 18:56:21.772777  429608 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 18:56:21.772786  429608 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 18:56:21.772798  429608 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0717 18:56:21.772816  429608 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0717 18:56:21.772828  429608 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0717 18:56:21.772840  429608 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0717 18:56:21.772860  429608 command_runner.go:130] > # shared_cpuset = ""
	I0717 18:56:21.772873  429608 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 18:56:21.772883  429608 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 18:56:21.772894  429608 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 18:56:21.772909  429608 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 18:56:21.772919  429608 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 18:56:21.772930  429608 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0717 18:56:21.772944  429608 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0717 18:56:21.772954  429608 command_runner.go:130] > # enable_criu_support = false
	I0717 18:56:21.772964  429608 command_runner.go:130] > # Enable/disable the generation of the container,
	I0717 18:56:21.772977  429608 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0717 18:56:21.772988  429608 command_runner.go:130] > # enable_pod_events = false
	I0717 18:56:21.773009  429608 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 18:56:21.773022  429608 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 18:56:21.773035  429608 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0717 18:56:21.773043  429608 command_runner.go:130] > # default_runtime = "runc"
	I0717 18:56:21.773053  429608 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 18:56:21.773069  429608 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 18:56:21.773088  429608 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 18:56:21.773111  429608 command_runner.go:130] > # creation as a file is not desired either.
	I0717 18:56:21.773129  429608 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 18:56:21.773140  429608 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 18:56:21.773151  429608 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 18:56:21.773157  429608 command_runner.go:130] > # ]
	I0717 18:56:21.773168  429608 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 18:56:21.773182  429608 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 18:56:21.773195  429608 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0717 18:56:21.773206  429608 command_runner.go:130] > # Each entry in the table should follow the format:
	I0717 18:56:21.773212  429608 command_runner.go:130] > #
	I0717 18:56:21.773223  429608 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0717 18:56:21.773235  429608 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0717 18:56:21.773263  429608 command_runner.go:130] > # runtime_type = "oci"
	I0717 18:56:21.773275  429608 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0717 18:56:21.773286  429608 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0717 18:56:21.773295  429608 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0717 18:56:21.773306  429608 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0717 18:56:21.773315  429608 command_runner.go:130] > # monitor_env = []
	I0717 18:56:21.773327  429608 command_runner.go:130] > # privileged_without_host_devices = false
	I0717 18:56:21.773337  429608 command_runner.go:130] > # allowed_annotations = []
	I0717 18:56:21.773348  429608 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0717 18:56:21.773358  429608 command_runner.go:130] > # Where:
	I0717 18:56:21.773367  429608 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0717 18:56:21.773381  429608 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0717 18:56:21.773394  429608 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 18:56:21.773405  429608 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 18:56:21.773414  429608 command_runner.go:130] > #   in $PATH.
	I0717 18:56:21.773436  429608 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0717 18:56:21.773448  429608 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 18:56:21.773461  429608 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0717 18:56:21.773470  429608 command_runner.go:130] > #   state.
	I0717 18:56:21.773482  429608 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 18:56:21.773496  429608 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0717 18:56:21.773509  429608 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 18:56:21.773523  429608 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 18:56:21.773536  429608 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 18:56:21.773548  429608 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 18:56:21.773563  429608 command_runner.go:130] > #   The currently recognized values are:
	I0717 18:56:21.773575  429608 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 18:56:21.773587  429608 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 18:56:21.773598  429608 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 18:56:21.773612  429608 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 18:56:21.773628  429608 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 18:56:21.773642  429608 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 18:56:21.773657  429608 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0717 18:56:21.773671  429608 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0717 18:56:21.773682  429608 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 18:56:21.773697  429608 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0717 18:56:21.773708  429608 command_runner.go:130] > #   deprecated option "conmon".
	I0717 18:56:21.773721  429608 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0717 18:56:21.773733  429608 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0717 18:56:21.773747  429608 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0717 18:56:21.773758  429608 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 18:56:21.773772  429608 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0717 18:56:21.773783  429608 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0717 18:56:21.773797  429608 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0717 18:56:21.773817  429608 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0717 18:56:21.773826  429608 command_runner.go:130] > #
	I0717 18:56:21.773835  429608 command_runner.go:130] > # Using the seccomp notifier feature:
	I0717 18:56:21.773843  429608 command_runner.go:130] > #
	I0717 18:56:21.773854  429608 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0717 18:56:21.773868  429608 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0717 18:56:21.773877  429608 command_runner.go:130] > #
	I0717 18:56:21.773888  429608 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0717 18:56:21.773901  429608 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0717 18:56:21.773910  429608 command_runner.go:130] > #
	I0717 18:56:21.773921  429608 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0717 18:56:21.773930  429608 command_runner.go:130] > # feature.
	I0717 18:56:21.773936  429608 command_runner.go:130] > #
	I0717 18:56:21.773946  429608 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0717 18:56:21.773960  429608 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0717 18:56:21.773975  429608 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0717 18:56:21.773994  429608 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0717 18:56:21.774008  429608 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0717 18:56:21.774016  429608 command_runner.go:130] > #
	I0717 18:56:21.774026  429608 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0717 18:56:21.774037  429608 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0717 18:56:21.774045  429608 command_runner.go:130] > #
	I0717 18:56:21.774056  429608 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0717 18:56:21.774070  429608 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0717 18:56:21.774080  429608 command_runner.go:130] > #
	I0717 18:56:21.774092  429608 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0717 18:56:21.774104  429608 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0717 18:56:21.774112  429608 command_runner.go:130] > # limitation.
	I0717 18:56:21.774124  429608 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 18:56:21.774135  429608 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 18:56:21.774145  429608 command_runner.go:130] > runtime_type = "oci"
	I0717 18:56:21.774154  429608 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 18:56:21.774165  429608 command_runner.go:130] > runtime_config_path = ""
	I0717 18:56:21.774175  429608 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0717 18:56:21.774185  429608 command_runner.go:130] > monitor_cgroup = "pod"
	I0717 18:56:21.774193  429608 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 18:56:21.774201  429608 command_runner.go:130] > monitor_env = [
	I0717 18:56:21.774213  429608 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 18:56:21.774222  429608 command_runner.go:130] > ]
	I0717 18:56:21.774231  429608 command_runner.go:130] > privileged_without_host_devices = false
	I0717 18:56:21.774259  429608 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 18:56:21.774276  429608 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 18:56:21.774287  429608 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 18:56:21.774301  429608 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I0717 18:56:21.774318  429608 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 18:56:21.774331  429608 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 18:56:21.774348  429608 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 18:56:21.774362  429608 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 18:56:21.774370  429608 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 18:56:21.774380  429608 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 18:56:21.774387  429608 command_runner.go:130] > # Example:
	I0717 18:56:21.774395  429608 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 18:56:21.774404  429608 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 18:56:21.774418  429608 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 18:56:21.774428  429608 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 18:56:21.774435  429608 command_runner.go:130] > # cpuset = 0
	I0717 18:56:21.774443  429608 command_runner.go:130] > # cpushares = "0-1"
	I0717 18:56:21.774449  429608 command_runner.go:130] > # Where:
	I0717 18:56:21.774456  429608 command_runner.go:130] > # The workload name is workload-type.
	I0717 18:56:21.774468  429608 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 18:56:21.774477  429608 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 18:56:21.774487  429608 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 18:56:21.774500  429608 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 18:56:21.774511  429608 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
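	A minimal illustration of the annotation scheme described above, using the example workload-type workload (the container name "app" and the value "512" are purely illustrative):
	  #   metadata:
	  #     annotations:
	  #       io.crio/workload: ""                            # activation annotation (key only, value ignored)
	  #       io.crio.workload-type.cpushares/app: "512"      # per-container override: $annotation_prefix.$resource/$ctrName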
	I0717 18:56:21.774519  429608 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0717 18:56:21.774530  429608 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0717 18:56:21.774538  429608 command_runner.go:130] > # Default value is set to true
	I0717 18:56:21.774546  429608 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0717 18:56:21.774556  429608 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0717 18:56:21.774567  429608 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0717 18:56:21.774575  429608 command_runner.go:130] > # Default value is set to 'false'
	I0717 18:56:21.774583  429608 command_runner.go:130] > # disable_hostport_mapping = false
	I0717 18:56:21.774594  429608 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 18:56:21.774600  429608 command_runner.go:130] > #
	I0717 18:56:21.774611  429608 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 18:56:21.774627  429608 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 18:56:21.774641  429608 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 18:56:21.774655  429608 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 18:56:21.774668  429608 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 18:56:21.774678  429608 command_runner.go:130] > [crio.image]
	I0717 18:56:21.774688  429608 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 18:56:21.774700  429608 command_runner.go:130] > # default_transport = "docker://"
	I0717 18:56:21.774712  429608 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 18:56:21.774726  429608 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 18:56:21.774735  429608 command_runner.go:130] > # global_auth_file = ""
	I0717 18:56:21.774746  429608 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 18:56:21.774757  429608 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:56:21.774767  429608 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0717 18:56:21.774781  429608 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 18:56:21.774794  429608 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 18:56:21.774809  429608 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:56:21.774825  429608 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 18:56:21.774838  429608 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 18:56:21.774849  429608 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0717 18:56:21.774860  429608 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0717 18:56:21.774873  429608 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 18:56:21.774884  429608 command_runner.go:130] > # pause_command = "/pause"
	I0717 18:56:21.774897  429608 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0717 18:56:21.774911  429608 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0717 18:56:21.774922  429608 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0717 18:56:21.774939  429608 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0717 18:56:21.774952  429608 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0717 18:56:21.774965  429608 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0717 18:56:21.774973  429608 command_runner.go:130] > # pinned_images = [
	I0717 18:56:21.774981  429608 command_runner.go:130] > # ]
	I0717 18:56:21.774994  429608 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 18:56:21.775008  429608 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 18:56:21.775022  429608 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 18:56:21.775033  429608 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 18:56:21.775045  429608 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 18:56:21.775053  429608 command_runner.go:130] > # signature_policy = ""
	I0717 18:56:21.775066  429608 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0717 18:56:21.775080  429608 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0717 18:56:21.775093  429608 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0717 18:56:21.775107  429608 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0717 18:56:21.775117  429608 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0717 18:56:21.775129  429608 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0717 18:56:21.775141  429608 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 18:56:21.775155  429608 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 18:56:21.775165  429608 command_runner.go:130] > # changing them here.
	I0717 18:56:21.775173  429608 command_runner.go:130] > # insecure_registries = [
	I0717 18:56:21.775182  429608 command_runner.go:130] > # ]
	I0717 18:56:21.775193  429608 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 18:56:21.775206  429608 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 18:56:21.775214  429608 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 18:56:21.775225  429608 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 18:56:21.775237  429608 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 18:56:21.775253  429608 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 18:56:21.775263  429608 command_runner.go:130] > # CNI plugins.
	I0717 18:56:21.775270  429608 command_runner.go:130] > [crio.network]
	I0717 18:56:21.775283  429608 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 18:56:21.775293  429608 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0717 18:56:21.775303  429608 command_runner.go:130] > # cni_default_network = ""
	I0717 18:56:21.775316  429608 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 18:56:21.775327  429608 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 18:56:21.775339  429608 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 18:56:21.775349  429608 command_runner.go:130] > # plugin_dirs = [
	I0717 18:56:21.775358  429608 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 18:56:21.775366  429608 command_runner.go:130] > # ]
	I0717 18:56:21.775376  429608 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 18:56:21.775386  429608 command_runner.go:130] > [crio.metrics]
	I0717 18:56:21.775396  429608 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 18:56:21.775407  429608 command_runner.go:130] > enable_metrics = true
	I0717 18:56:21.775416  429608 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 18:56:21.775427  429608 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 18:56:21.775440  429608 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0717 18:56:21.775454  429608 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 18:56:21.775466  429608 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 18:56:21.775478  429608 command_runner.go:130] > # metrics_collectors = [
	I0717 18:56:21.775487  429608 command_runner.go:130] > # 	"operations",
	I0717 18:56:21.775498  429608 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 18:56:21.775509  429608 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 18:56:21.775519  429608 command_runner.go:130] > # 	"operations_errors",
	I0717 18:56:21.775527  429608 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 18:56:21.775537  429608 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 18:56:21.775546  429608 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 18:56:21.775557  429608 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 18:56:21.775564  429608 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 18:56:21.775570  429608 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 18:56:21.775577  429608 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 18:56:21.775588  429608 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0717 18:56:21.775598  429608 command_runner.go:130] > # 	"containers_oom_total",
	I0717 18:56:21.775606  429608 command_runner.go:130] > # 	"containers_oom",
	I0717 18:56:21.775613  429608 command_runner.go:130] > # 	"processes_defunct",
	I0717 18:56:21.775622  429608 command_runner.go:130] > # 	"operations_total",
	I0717 18:56:21.775632  429608 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 18:56:21.775641  429608 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 18:56:21.775652  429608 command_runner.go:130] > # 	"operations_errors_total",
	I0717 18:56:21.775662  429608 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 18:56:21.775672  429608 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 18:56:21.775681  429608 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 18:56:21.775689  429608 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 18:56:21.775704  429608 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 18:56:21.775715  429608 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 18:56:21.775724  429608 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0717 18:56:21.775735  429608 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0717 18:56:21.775742  429608 command_runner.go:130] > # ]
	I0717 18:56:21.775756  429608 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 18:56:21.775766  429608 command_runner.go:130] > # metrics_port = 9090
	I0717 18:56:21.775776  429608 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 18:56:21.775785  429608 command_runner.go:130] > # metrics_socket = ""
	I0717 18:56:21.775795  429608 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 18:56:21.775816  429608 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 18:56:21.775830  429608 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 18:56:21.775841  429608 command_runner.go:130] > # certificate on any modification event.
	I0717 18:56:21.775851  429608 command_runner.go:130] > # metrics_cert = ""
	I0717 18:56:21.775861  429608 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 18:56:21.775874  429608 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 18:56:21.775882  429608 command_runner.go:130] > # metrics_key = ""
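	With enable_metrics = true and the default metrics_port above, the listed collectors are served in Prometheus text format; a quick, node-local probe (port and metric prefix are the defaults from this config, not verified in this run) could be:
	  # Scrape CRI-O's metrics endpoint and show a few operation counters
	  curl -s http://127.0.0.1:9090/metrics | grep crio_operations | head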
	I0717 18:56:21.775896  429608 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 18:56:21.775903  429608 command_runner.go:130] > [crio.tracing]
	I0717 18:56:21.775914  429608 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 18:56:21.775925  429608 command_runner.go:130] > # enable_tracing = false
	I0717 18:56:21.775935  429608 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 18:56:21.775946  429608 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 18:56:21.775961  429608 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0717 18:56:21.775973  429608 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 18:56:21.775983  429608 command_runner.go:130] > # CRI-O NRI configuration.
	I0717 18:56:21.775990  429608 command_runner.go:130] > [crio.nri]
	I0717 18:56:21.775998  429608 command_runner.go:130] > # Globally enable or disable NRI.
	I0717 18:56:21.776008  429608 command_runner.go:130] > # enable_nri = false
	I0717 18:56:21.776015  429608 command_runner.go:130] > # NRI socket to listen on.
	I0717 18:56:21.776024  429608 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0717 18:56:21.776034  429608 command_runner.go:130] > # NRI plugin directory to use.
	I0717 18:56:21.776045  429608 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0717 18:56:21.776056  429608 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0717 18:56:21.776067  429608 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0717 18:56:21.776080  429608 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0717 18:56:21.776090  429608 command_runner.go:130] > # nri_disable_connections = false
	I0717 18:56:21.776102  429608 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0717 18:56:21.776114  429608 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0717 18:56:21.776124  429608 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0717 18:56:21.776134  429608 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0717 18:56:21.776148  429608 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 18:56:21.776156  429608 command_runner.go:130] > [crio.stats]
	I0717 18:56:21.776171  429608 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 18:56:21.776183  429608 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 18:56:21.776192  429608 command_runner.go:130] > # stats_collection_period = 0
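	The TOML dumped above is the effective configuration minikube rendered for this node; as a sketch (exact output varies by CRI-O version), it can be re-rendered on the host from the files under /etc/crio with:
	  # Print the configuration CRI-O would run with, given its config files and defaults
	  sudo crio config | head -n 40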
	I0717 18:56:21.776316  429608 cni.go:84] Creating CNI manager for ""
	I0717 18:56:21.776330  429608 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 18:56:21.776342  429608 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:56:21.776374  429608 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.122 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-717026 NodeName:multinode-717026 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.122"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.122 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:56:21.776580  429608 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.122
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-717026"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.122
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.122"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:56:21.776662  429608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:56:21.786982  429608 command_runner.go:130] > kubeadm
	I0717 18:56:21.787001  429608 command_runner.go:130] > kubectl
	I0717 18:56:21.787007  429608 command_runner.go:130] > kubelet
	I0717 18:56:21.787034  429608 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:56:21.787086  429608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:56:21.796632  429608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0717 18:56:21.812922  429608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:56:21.830064  429608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
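	The 2160-byte file just copied to /var/tmp/minikube/kubeadm.yaml.new is the multi-document kubeadm config printed above; as a hedged sketch (this command is not part of the run), the control-plane images it implies could be listed with the kubeadm binary found below:
	  # List the images kubeadm v1.30.2 would pull for this config
	  sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml.new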
	I0717 18:56:21.846909  429608 ssh_runner.go:195] Run: grep 192.168.39.122	control-plane.minikube.internal$ /etc/hosts
	I0717 18:56:21.850796  429608 command_runner.go:130] > 192.168.39.122	control-plane.minikube.internal
	I0717 18:56:21.850865  429608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:56:21.990987  429608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:56:22.005942  429608 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026 for IP: 192.168.39.122
	I0717 18:56:22.005968  429608 certs.go:194] generating shared ca certs ...
	I0717 18:56:22.005991  429608 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:56:22.006149  429608 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 18:56:22.006186  429608 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 18:56:22.006197  429608 certs.go:256] generating profile certs ...
	I0717 18:56:22.006298  429608 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/client.key
	I0717 18:56:22.006371  429608 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/apiserver.key.376728e4
	I0717 18:56:22.006405  429608 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/proxy-client.key
	I0717 18:56:22.006414  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 18:56:22.006425  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 18:56:22.006436  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 18:56:22.006449  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 18:56:22.006460  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 18:56:22.006471  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 18:56:22.006482  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 18:56:22.006494  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 18:56:22.006551  429608 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 18:56:22.006578  429608 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 18:56:22.006590  429608 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 18:56:22.006612  429608 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 18:56:22.006637  429608 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:56:22.006659  429608 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 18:56:22.006700  429608 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:56:22.006731  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem -> /usr/share/ca-certificates/400171.pem
	I0717 18:56:22.006744  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /usr/share/ca-certificates/4001712.pem
	I0717 18:56:22.006755  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:56:22.007437  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:56:22.032783  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:56:22.055903  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:56:22.079017  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 18:56:22.101956  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 18:56:22.125556  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:56:22.148182  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:56:22.173755  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:56:22.196656  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 18:56:22.219520  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 18:56:22.242326  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:56:22.264831  429608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:56:22.281678  429608 ssh_runner.go:195] Run: openssl version
	I0717 18:56:22.287464  429608 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0717 18:56:22.287670  429608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 18:56:22.298914  429608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 18:56:22.303530  429608 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 18:56:22.303565  429608 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 18:56:22.303632  429608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 18:56:22.309216  429608 command_runner.go:130] > 51391683
	I0717 18:56:22.309294  429608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 18:56:22.318729  429608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 18:56:22.329954  429608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 18:56:22.334060  429608 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 18:56:22.334201  429608 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 18:56:22.334241  429608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 18:56:22.339767  429608 command_runner.go:130] > 3ec20f2e
	I0717 18:56:22.339824  429608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:56:22.349009  429608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:56:22.359813  429608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:56:22.363972  429608 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:56:22.364115  429608 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:56:22.364153  429608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:56:22.369326  429608 command_runner.go:130] > b5213941
	I0717 18:56:22.369507  429608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
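	The three test/ln/hash sequences above implement the standard OpenSSL trust-store layout, where each CA certificate is reachable through a symlink named after its subject hash; the equivalent step done by hand (same paths as in the log) is:
	  # Compute the CA's subject hash and link it into /etc/ssl/certs as <hash>.0
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"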
	I0717 18:56:22.378690  429608 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:56:22.382978  429608 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:56:22.382999  429608 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0717 18:56:22.383010  429608 command_runner.go:130] > Device: 253,1	Inode: 533781      Links: 1
	I0717 18:56:22.383019  429608 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 18:56:22.383028  429608 command_runner.go:130] > Access: 2024-07-17 18:49:22.072509315 +0000
	I0717 18:56:22.383035  429608 command_runner.go:130] > Modify: 2024-07-17 18:49:22.072509315 +0000
	I0717 18:56:22.383045  429608 command_runner.go:130] > Change: 2024-07-17 18:49:22.072509315 +0000
	I0717 18:56:22.383054  429608 command_runner.go:130] >  Birth: 2024-07-17 18:49:22.072509315 +0000
	I0717 18:56:22.383103  429608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:56:22.388411  429608 command_runner.go:130] > Certificate will not expire
	I0717 18:56:22.388586  429608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:56:22.393816  429608 command_runner.go:130] > Certificate will not expire
	I0717 18:56:22.394035  429608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:56:22.399218  429608 command_runner.go:130] > Certificate will not expire
	I0717 18:56:22.399272  429608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:56:22.404774  429608 command_runner.go:130] > Certificate will not expire
	I0717 18:56:22.404824  429608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:56:22.409909  429608 command_runner.go:130] > Certificate will not expire
	I0717 18:56:22.410056  429608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 18:56:22.415014  429608 command_runner.go:130] > Certificate will not expire
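	Each "Certificate will not expire" line above comes from openssl's -checkend flag, which succeeds only if the certificate is still valid the given number of seconds from now (86400 s = 24 h). The check for any one certificate can be reproduced as:
	  # Exit status 0: still valid 24 hours from now; 1: it will have expired by then
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; echo "exit=$?"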
	I0717 18:56:22.415236  429608 kubeadm.go:392] StartCluster: {Name:multinode-717026 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:multinode-717026 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:56:22.415368  429608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:56:22.415427  429608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:56:22.451380  429608 command_runner.go:130] > 6f88dfe732d94434b50d5843b98c9e6e55b922129065f235e2feb2e6f943e18d
	I0717 18:56:22.451417  429608 command_runner.go:130] > 60d0256aba83fe2dadbde3f16a6c991063c384879e6a3dab481e5d5d55793d70
	I0717 18:56:22.451424  429608 command_runner.go:130] > 9ca075474ac25e2ab323c0e66a816afb9f0f55fc6fd98b42a1ffa7f9a14f9fbb
	I0717 18:56:22.451430  429608 command_runner.go:130] > 34b14c23bb1ca87f39f25f624aa953ed6eebc4fa2a9a2d74a52c1250d7389eb1
	I0717 18:56:22.451436  429608 command_runner.go:130] > 2b889bd8bab05d3c179cd226331a5f1ae9394a0fb433fb4aa0b5d2657c2d99d1
	I0717 18:56:22.451441  429608 command_runner.go:130] > bee098e6d7719dc5ca7f9781813c78ba808672dddb1563969fb4856133308685
	I0717 18:56:22.451447  429608 command_runner.go:130] > af6609edbfc9adc682e4e031907ae9d13380b5ee79245704dff50cbdecf54b4b
	I0717 18:56:22.451454  429608 command_runner.go:130] > 730b32413676a97354e3c2dab9aeb0a0e9fc6b21402593c4074e7b18f29b8556
	I0717 18:56:22.452696  429608 cri.go:89] found id: "6f88dfe732d94434b50d5843b98c9e6e55b922129065f235e2feb2e6f943e18d"
	I0717 18:56:22.452718  429608 cri.go:89] found id: "60d0256aba83fe2dadbde3f16a6c991063c384879e6a3dab481e5d5d55793d70"
	I0717 18:56:22.452722  429608 cri.go:89] found id: "9ca075474ac25e2ab323c0e66a816afb9f0f55fc6fd98b42a1ffa7f9a14f9fbb"
	I0717 18:56:22.452726  429608 cri.go:89] found id: "34b14c23bb1ca87f39f25f624aa953ed6eebc4fa2a9a2d74a52c1250d7389eb1"
	I0717 18:56:22.452754  429608 cri.go:89] found id: "2b889bd8bab05d3c179cd226331a5f1ae9394a0fb433fb4aa0b5d2657c2d99d1"
	I0717 18:56:22.452765  429608 cri.go:89] found id: "bee098e6d7719dc5ca7f9781813c78ba808672dddb1563969fb4856133308685"
	I0717 18:56:22.452769  429608 cri.go:89] found id: "af6609edbfc9adc682e4e031907ae9d13380b5ee79245704dff50cbdecf54b4b"
	I0717 18:56:22.452772  429608 cri.go:89] found id: "730b32413676a97354e3c2dab9aeb0a0e9fc6b21402593c4074e7b18f29b8556"
	I0717 18:56:22.452775  429608 cri.go:89] found id: ""
	I0717 18:56:22.452825  429608 ssh_runner.go:195] Run: sudo runc list -f json
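	The two commands above are how minikube enumerates kube-system containers before restarting the cluster; the same crictl filter can be run interactively on the node for a readable listing (flags as in the log, with the output format added here as an assumption):
	  # List all kube-system containers, including exited ones
	  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o table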
	
	
	==> CRI-O <==
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.902757737Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242689902734868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b805cf2-c10f-4407-be79-9f44f95b84d6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.903491780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=109a1bd4-388e-4fb0-bdf1-017471e454dd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.903675230Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=109a1bd4-388e-4fb0-bdf1-017471e454dd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.904042021Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58d3a21458f22673c29a7fee8cc849867a0129dbe38797621a789cd4680508ac,PodSandboxId:741aa941fd865775662354ade1d4f7e9ca5641f9808499285da9192c575e903c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721242622559885852,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5vj5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 368c0d4d-7a32-4133-a588-6994180de799,},Annotations:map[string]string{io.kubernetes.container.hash: 86ab4529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bda6f98afceaeb088a1097df5f9dddc483a197ec1f4d27c1de623683df7dceb9,PodSandboxId:a126ee58845bc8232e62e4cf69be1197c3ab8790e98ca4aea760db7de027abb4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721242588963440703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2dgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c980f2ac-1e0d-4c68-9f92-168a82001f8a,},Annotations:map[string]string{io.kubernetes.container.hash: 1bb8452,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a5e8dcee13e757238d3fe01b25ae84be1c35ba4ef19fefd3e231656aefc11,PodSandboxId:d395cff4b789fd35694afbdd894571d0b7ff14a708b64be9edbb206c06176f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721242588931685681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7whgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f117d-b29b-41a4-97f9-259912fd66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 7b4c4aa6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a35a415102d2084fc4473777ffdc1b793a3f2e4ef07b1203aa3cecaf5496a8,PodSandboxId:0dfa9abde7a4540f1d65af1281c1d5540b7056e6b13e70a7d326665c5c3507fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721242588831145039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d3b9792-edc4-4e05-9403-e13289faba69,},Ann
otations:map[string]string{io.kubernetes.container.hash: f9ab1c85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c113fe5012415a8b4bc7042cacd41b98640a5ff67abfb4b142eece598706513,PodSandboxId:7497f051f9f8b7e4e560a8925f35ec0e482f9c631f899c6ea5cbedbcee12a3f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721242588832271826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvt54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3f31e4-5ec7-4731-87b0-a4082e52bfbc,},Annotations:map[string]string{io.kub
ernetes.container.hash: dcf4cbec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd93f6e85081e15c5b84892387d16c77bcef983a8b112108b45884e2d1c5e16f,PodSandboxId:8d89739fe5ed1c16629a5db27aa5882f3378ad1460b454011f3f1fbad088a5cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721242585040761817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6562aa04bb932d82684c593d9f2c44,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f6fc492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca42fc6a22e16e4a2c849c4b399cf1416ac11bff7401f8b5e7d09879b7f95557,PodSandboxId:d949e0d743ff074f5db04eebec89702a01233c8fb268b18a35dce28ef78e8fad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721242585018724547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e42142eae007ea8e03819ef1a7ee5b3,},Annotations:map[string]string{io.kubernetes.container.hash: cca3a628,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87cdb9250f0247ae0247c5ad252b317548321bfbece3d3081339a63799a3ee7f,PodSandboxId:55278faf6c4bd2d55f3b65c286c1e0c1aa29da1df363761a4df0f5629c92e839,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721242585005953055,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d520ed60a76def6a299a457c51d963,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21a506be09da7e47b592e1f71f4ead3df58c1e7fd95f2067f7d9b65a8b30726,PodSandboxId:96970cc5d1753532077ca2a039263d3a48199f3c7cbaff291e763cdb8416236d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721242585028077131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3f8d7535b02435578d6e4d7b663890,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4762f51a10e39b9d3db6088273ef46e11075e067a5af05563ad02376d2b16032,PodSandboxId:fef616af7f6711e01184f067f29623ec627c5df8e8fa027f944f3677bd393311,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721242254750058813,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5vj5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 368c0d4d-7a32-4133-a588-6994180de799,},Annotations:map[string]string{io.kubernetes.container.hash: 86ab4529,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f88dfe732d94434b50d5843b98c9e6e55b922129065f235e2feb2e6f943e18d,PodSandboxId:9aec21353c8fdafb61defc1d25c4fd601f358acdc8c092c64f37d363c3e48860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721242200881473163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7whgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f117d-b29b-41a4-97f9-259912fd66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 7b4c4aa6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d0256aba83fe2dadbde3f16a6c991063c384879e6a3dab481e5d5d55793d70,PodSandboxId:963cc6ee019020a061ba421b794380b75066e4c252a8800e5a209e610357a87d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721242200825331078,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 3d3b9792-edc4-4e05-9403-e13289faba69,},Annotations:map[string]string{io.kubernetes.container.hash: f9ab1c85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca075474ac25e2ab323c0e66a816afb9f0f55fc6fd98b42a1ffa7f9a14f9fbb,PodSandboxId:825799d719c60274ae5ebae15f1b5e17b332007636d9b8a31281e3c7240ef491,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721242189058609342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2dgx,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: c980f2ac-1e0d-4c68-9f92-168a82001f8a,},Annotations:map[string]string{io.kubernetes.container.hash: 1bb8452,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b14c23bb1ca87f39f25f624aa953ed6eebc4fa2a9a2d74a52c1250d7389eb1,PodSandboxId:c14ba1bb244dbf3198a2f6c5417d22be884b536a3bb6ab2b088baee719375b22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721242185324388231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvt54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 1b3f31e4-5ec7-4731-87b0-a4082e52bfbc,},Annotations:map[string]string{io.kubernetes.container.hash: dcf4cbec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b889bd8bab05d3c179cd226331a5f1ae9394a0fb433fb4aa0b5d2657c2d99d1,PodSandboxId:8e40ffbd5b407315d2ddb139cd90930dcd4f6165c3392b38258bc942160cbedf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721242165900957349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e42142eae007ea8e03819ef1a7ee5b3,}
,Annotations:map[string]string{io.kubernetes.container.hash: cca3a628,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee098e6d7719dc5ca7f9781813c78ba808672dddb1563969fb4856133308685,PodSandboxId:9bf305a4bce6c7eab56b5eeeea14f477514ceb437235368d80c99ee1ebb2fe99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721242165890770509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d520ed60a76def6a299a457c51d963,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730b32413676a97354e3c2dab9aeb0a0e9fc6b21402593c4074e7b18f29b8556,PodSandboxId:17a40604de1e1a5227396cb58bb8fb2ebf142c713e5792e545e4d971e259b715,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721242165802285739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3f8d7535b02435578d6e4d7b663890,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af6609edbfc9adc682e4e031907ae9d13380b5ee79245704dff50cbdecf54b4b,PodSandboxId:5499275fa0e06e77acdf0c6ce1dc3124d935b163fe96e7de82baeb15e863f261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721242165806136825,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6562aa04bb932d82684c593d9f2c44,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f6fc492,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=109a1bd4-388e-4fb0-bdf1-017471e454dd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.946291493Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6428eb94-eb14-4c5a-ba2c-5a61685da0de name=/runtime.v1.RuntimeService/Version
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.946423749Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6428eb94-eb14-4c5a-ba2c-5a61685da0de name=/runtime.v1.RuntimeService/Version
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.947815463Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2124e471-77b0-4251-b95f-fd55435b6ca7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.948478523Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242689948452965,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2124e471-77b0-4251-b95f-fd55435b6ca7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.949178760Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3bb3f85-43e2-408c-8723-aac24310bcfa name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.949251347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3bb3f85-43e2-408c-8723-aac24310bcfa name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.949730186Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58d3a21458f22673c29a7fee8cc849867a0129dbe38797621a789cd4680508ac,PodSandboxId:741aa941fd865775662354ade1d4f7e9ca5641f9808499285da9192c575e903c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721242622559885852,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5vj5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 368c0d4d-7a32-4133-a588-6994180de799,},Annotations:map[string]string{io.kubernetes.container.hash: 86ab4529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bda6f98afceaeb088a1097df5f9dddc483a197ec1f4d27c1de623683df7dceb9,PodSandboxId:a126ee58845bc8232e62e4cf69be1197c3ab8790e98ca4aea760db7de027abb4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721242588963440703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2dgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c980f2ac-1e0d-4c68-9f92-168a82001f8a,},Annotations:map[string]string{io.kubernetes.container.hash: 1bb8452,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a5e8dcee13e757238d3fe01b25ae84be1c35ba4ef19fefd3e231656aefc11,PodSandboxId:d395cff4b789fd35694afbdd894571d0b7ff14a708b64be9edbb206c06176f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721242588931685681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7whgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f117d-b29b-41a4-97f9-259912fd66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 7b4c4aa6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a35a415102d2084fc4473777ffdc1b793a3f2e4ef07b1203aa3cecaf5496a8,PodSandboxId:0dfa9abde7a4540f1d65af1281c1d5540b7056e6b13e70a7d326665c5c3507fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721242588831145039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d3b9792-edc4-4e05-9403-e13289faba69,},Ann
otations:map[string]string{io.kubernetes.container.hash: f9ab1c85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c113fe5012415a8b4bc7042cacd41b98640a5ff67abfb4b142eece598706513,PodSandboxId:7497f051f9f8b7e4e560a8925f35ec0e482f9c631f899c6ea5cbedbcee12a3f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721242588832271826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvt54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3f31e4-5ec7-4731-87b0-a4082e52bfbc,},Annotations:map[string]string{io.kub
ernetes.container.hash: dcf4cbec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd93f6e85081e15c5b84892387d16c77bcef983a8b112108b45884e2d1c5e16f,PodSandboxId:8d89739fe5ed1c16629a5db27aa5882f3378ad1460b454011f3f1fbad088a5cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721242585040761817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6562aa04bb932d82684c593d9f2c44,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f6fc492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca42fc6a22e16e4a2c849c4b399cf1416ac11bff7401f8b5e7d09879b7f95557,PodSandboxId:d949e0d743ff074f5db04eebec89702a01233c8fb268b18a35dce28ef78e8fad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721242585018724547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e42142eae007ea8e03819ef1a7ee5b3,},Annotations:map[string]string{io.kubernetes.container.hash: cca3a628,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87cdb9250f0247ae0247c5ad252b317548321bfbece3d3081339a63799a3ee7f,PodSandboxId:55278faf6c4bd2d55f3b65c286c1e0c1aa29da1df363761a4df0f5629c92e839,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721242585005953055,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d520ed60a76def6a299a457c51d963,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21a506be09da7e47b592e1f71f4ead3df58c1e7fd95f2067f7d9b65a8b30726,PodSandboxId:96970cc5d1753532077ca2a039263d3a48199f3c7cbaff291e763cdb8416236d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721242585028077131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3f8d7535b02435578d6e4d7b663890,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4762f51a10e39b9d3db6088273ef46e11075e067a5af05563ad02376d2b16032,PodSandboxId:fef616af7f6711e01184f067f29623ec627c5df8e8fa027f944f3677bd393311,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721242254750058813,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5vj5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 368c0d4d-7a32-4133-a588-6994180de799,},Annotations:map[string]string{io.kubernetes.container.hash: 86ab4529,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f88dfe732d94434b50d5843b98c9e6e55b922129065f235e2feb2e6f943e18d,PodSandboxId:9aec21353c8fdafb61defc1d25c4fd601f358acdc8c092c64f37d363c3e48860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721242200881473163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7whgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f117d-b29b-41a4-97f9-259912fd66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 7b4c4aa6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d0256aba83fe2dadbde3f16a6c991063c384879e6a3dab481e5d5d55793d70,PodSandboxId:963cc6ee019020a061ba421b794380b75066e4c252a8800e5a209e610357a87d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721242200825331078,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 3d3b9792-edc4-4e05-9403-e13289faba69,},Annotations:map[string]string{io.kubernetes.container.hash: f9ab1c85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca075474ac25e2ab323c0e66a816afb9f0f55fc6fd98b42a1ffa7f9a14f9fbb,PodSandboxId:825799d719c60274ae5ebae15f1b5e17b332007636d9b8a31281e3c7240ef491,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721242189058609342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2dgx,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: c980f2ac-1e0d-4c68-9f92-168a82001f8a,},Annotations:map[string]string{io.kubernetes.container.hash: 1bb8452,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b14c23bb1ca87f39f25f624aa953ed6eebc4fa2a9a2d74a52c1250d7389eb1,PodSandboxId:c14ba1bb244dbf3198a2f6c5417d22be884b536a3bb6ab2b088baee719375b22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721242185324388231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvt54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 1b3f31e4-5ec7-4731-87b0-a4082e52bfbc,},Annotations:map[string]string{io.kubernetes.container.hash: dcf4cbec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b889bd8bab05d3c179cd226331a5f1ae9394a0fb433fb4aa0b5d2657c2d99d1,PodSandboxId:8e40ffbd5b407315d2ddb139cd90930dcd4f6165c3392b38258bc942160cbedf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721242165900957349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e42142eae007ea8e03819ef1a7ee5b3,}
,Annotations:map[string]string{io.kubernetes.container.hash: cca3a628,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee098e6d7719dc5ca7f9781813c78ba808672dddb1563969fb4856133308685,PodSandboxId:9bf305a4bce6c7eab56b5eeeea14f477514ceb437235368d80c99ee1ebb2fe99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721242165890770509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d520ed60a76def6a299a457c51d963,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730b32413676a97354e3c2dab9aeb0a0e9fc6b21402593c4074e7b18f29b8556,PodSandboxId:17a40604de1e1a5227396cb58bb8fb2ebf142c713e5792e545e4d971e259b715,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721242165802285739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3f8d7535b02435578d6e4d7b663890,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af6609edbfc9adc682e4e031907ae9d13380b5ee79245704dff50cbdecf54b4b,PodSandboxId:5499275fa0e06e77acdf0c6ce1dc3124d935b163fe96e7de82baeb15e863f261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721242165806136825,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6562aa04bb932d82684c593d9f2c44,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f6fc492,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3bb3f85-43e2-408c-8723-aac24310bcfa name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.994597569Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aab8ea2f-b2d2-4f11-b42b-f71b3e88aecd name=/runtime.v1.RuntimeService/Version
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.994692268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aab8ea2f-b2d2-4f11-b42b-f71b3e88aecd name=/runtime.v1.RuntimeService/Version
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.996101987Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a257b8f9-90c3-4c63-aa09-7a963eb31f77 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.996804504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242689996778475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a257b8f9-90c3-4c63-aa09-7a963eb31f77 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.997436957Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db62dde3-ed14-4c5f-8bb8-44e4e2dfcd77 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.997491346Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db62dde3-ed14-4c5f-8bb8-44e4e2dfcd77 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:58:09 multinode-717026 crio[2878]: time="2024-07-17 18:58:09.997838173Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58d3a21458f22673c29a7fee8cc849867a0129dbe38797621a789cd4680508ac,PodSandboxId:741aa941fd865775662354ade1d4f7e9ca5641f9808499285da9192c575e903c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721242622559885852,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5vj5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 368c0d4d-7a32-4133-a588-6994180de799,},Annotations:map[string]string{io.kubernetes.container.hash: 86ab4529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bda6f98afceaeb088a1097df5f9dddc483a197ec1f4d27c1de623683df7dceb9,PodSandboxId:a126ee58845bc8232e62e4cf69be1197c3ab8790e98ca4aea760db7de027abb4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721242588963440703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2dgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c980f2ac-1e0d-4c68-9f92-168a82001f8a,},Annotations:map[string]string{io.kubernetes.container.hash: 1bb8452,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a5e8dcee13e757238d3fe01b25ae84be1c35ba4ef19fefd3e231656aefc11,PodSandboxId:d395cff4b789fd35694afbdd894571d0b7ff14a708b64be9edbb206c06176f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721242588931685681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7whgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f117d-b29b-41a4-97f9-259912fd66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 7b4c4aa6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a35a415102d2084fc4473777ffdc1b793a3f2e4ef07b1203aa3cecaf5496a8,PodSandboxId:0dfa9abde7a4540f1d65af1281c1d5540b7056e6b13e70a7d326665c5c3507fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721242588831145039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d3b9792-edc4-4e05-9403-e13289faba69,},Ann
otations:map[string]string{io.kubernetes.container.hash: f9ab1c85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c113fe5012415a8b4bc7042cacd41b98640a5ff67abfb4b142eece598706513,PodSandboxId:7497f051f9f8b7e4e560a8925f35ec0e482f9c631f899c6ea5cbedbcee12a3f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721242588832271826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvt54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3f31e4-5ec7-4731-87b0-a4082e52bfbc,},Annotations:map[string]string{io.kub
ernetes.container.hash: dcf4cbec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd93f6e85081e15c5b84892387d16c77bcef983a8b112108b45884e2d1c5e16f,PodSandboxId:8d89739fe5ed1c16629a5db27aa5882f3378ad1460b454011f3f1fbad088a5cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721242585040761817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6562aa04bb932d82684c593d9f2c44,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f6fc492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca42fc6a22e16e4a2c849c4b399cf1416ac11bff7401f8b5e7d09879b7f95557,PodSandboxId:d949e0d743ff074f5db04eebec89702a01233c8fb268b18a35dce28ef78e8fad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721242585018724547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e42142eae007ea8e03819ef1a7ee5b3,},Annotations:map[string]string{io.kubernetes.container.hash: cca3a628,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87cdb9250f0247ae0247c5ad252b317548321bfbece3d3081339a63799a3ee7f,PodSandboxId:55278faf6c4bd2d55f3b65c286c1e0c1aa29da1df363761a4df0f5629c92e839,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721242585005953055,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d520ed60a76def6a299a457c51d963,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21a506be09da7e47b592e1f71f4ead3df58c1e7fd95f2067f7d9b65a8b30726,PodSandboxId:96970cc5d1753532077ca2a039263d3a48199f3c7cbaff291e763cdb8416236d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721242585028077131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3f8d7535b02435578d6e4d7b663890,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4762f51a10e39b9d3db6088273ef46e11075e067a5af05563ad02376d2b16032,PodSandboxId:fef616af7f6711e01184f067f29623ec627c5df8e8fa027f944f3677bd393311,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721242254750058813,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5vj5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 368c0d4d-7a32-4133-a588-6994180de799,},Annotations:map[string]string{io.kubernetes.container.hash: 86ab4529,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f88dfe732d94434b50d5843b98c9e6e55b922129065f235e2feb2e6f943e18d,PodSandboxId:9aec21353c8fdafb61defc1d25c4fd601f358acdc8c092c64f37d363c3e48860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721242200881473163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7whgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f117d-b29b-41a4-97f9-259912fd66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 7b4c4aa6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d0256aba83fe2dadbde3f16a6c991063c384879e6a3dab481e5d5d55793d70,PodSandboxId:963cc6ee019020a061ba421b794380b75066e4c252a8800e5a209e610357a87d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721242200825331078,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 3d3b9792-edc4-4e05-9403-e13289faba69,},Annotations:map[string]string{io.kubernetes.container.hash: f9ab1c85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca075474ac25e2ab323c0e66a816afb9f0f55fc6fd98b42a1ffa7f9a14f9fbb,PodSandboxId:825799d719c60274ae5ebae15f1b5e17b332007636d9b8a31281e3c7240ef491,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721242189058609342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2dgx,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: c980f2ac-1e0d-4c68-9f92-168a82001f8a,},Annotations:map[string]string{io.kubernetes.container.hash: 1bb8452,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b14c23bb1ca87f39f25f624aa953ed6eebc4fa2a9a2d74a52c1250d7389eb1,PodSandboxId:c14ba1bb244dbf3198a2f6c5417d22be884b536a3bb6ab2b088baee719375b22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721242185324388231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvt54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 1b3f31e4-5ec7-4731-87b0-a4082e52bfbc,},Annotations:map[string]string{io.kubernetes.container.hash: dcf4cbec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b889bd8bab05d3c179cd226331a5f1ae9394a0fb433fb4aa0b5d2657c2d99d1,PodSandboxId:8e40ffbd5b407315d2ddb139cd90930dcd4f6165c3392b38258bc942160cbedf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721242165900957349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e42142eae007ea8e03819ef1a7ee5b3,}
,Annotations:map[string]string{io.kubernetes.container.hash: cca3a628,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee098e6d7719dc5ca7f9781813c78ba808672dddb1563969fb4856133308685,PodSandboxId:9bf305a4bce6c7eab56b5eeeea14f477514ceb437235368d80c99ee1ebb2fe99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721242165890770509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d520ed60a76def6a299a457c51d963,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730b32413676a97354e3c2dab9aeb0a0e9fc6b21402593c4074e7b18f29b8556,PodSandboxId:17a40604de1e1a5227396cb58bb8fb2ebf142c713e5792e545e4d971e259b715,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721242165802285739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3f8d7535b02435578d6e4d7b663890,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af6609edbfc9adc682e4e031907ae9d13380b5ee79245704dff50cbdecf54b4b,PodSandboxId:5499275fa0e06e77acdf0c6ce1dc3124d935b163fe96e7de82baeb15e863f261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721242165806136825,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6562aa04bb932d82684c593d9f2c44,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f6fc492,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db62dde3-ed14-4c5f-8bb8-44e4e2dfcd77 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:58:10 multinode-717026 crio[2878]: time="2024-07-17 18:58:10.039990102Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b858443-00b9-4472-b258-a6e9e62b2537 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:58:10 multinode-717026 crio[2878]: time="2024-07-17 18:58:10.040081273Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b858443-00b9-4472-b258-a6e9e62b2537 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:58:10 multinode-717026 crio[2878]: time="2024-07-17 18:58:10.041322595Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b87eb37-0918-4ca3-835c-2cd0983fe8b7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:58:10 multinode-717026 crio[2878]: time="2024-07-17 18:58:10.041779770Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242690041758162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b87eb37-0918-4ca3-835c-2cd0983fe8b7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:58:10 multinode-717026 crio[2878]: time="2024-07-17 18:58:10.042398073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3db2ce6e-ab00-4f8c-aaf5-11b5aa1f76ea name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:58:10 multinode-717026 crio[2878]: time="2024-07-17 18:58:10.042455896Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3db2ce6e-ab00-4f8c-aaf5-11b5aa1f76ea name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:58:10 multinode-717026 crio[2878]: time="2024-07-17 18:58:10.042789470Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58d3a21458f22673c29a7fee8cc849867a0129dbe38797621a789cd4680508ac,PodSandboxId:741aa941fd865775662354ade1d4f7e9ca5641f9808499285da9192c575e903c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721242622559885852,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5vj5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 368c0d4d-7a32-4133-a588-6994180de799,},Annotations:map[string]string{io.kubernetes.container.hash: 86ab4529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bda6f98afceaeb088a1097df5f9dddc483a197ec1f4d27c1de623683df7dceb9,PodSandboxId:a126ee58845bc8232e62e4cf69be1197c3ab8790e98ca4aea760db7de027abb4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721242588963440703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2dgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c980f2ac-1e0d-4c68-9f92-168a82001f8a,},Annotations:map[string]string{io.kubernetes.container.hash: 1bb8452,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a5e8dcee13e757238d3fe01b25ae84be1c35ba4ef19fefd3e231656aefc11,PodSandboxId:d395cff4b789fd35694afbdd894571d0b7ff14a708b64be9edbb206c06176f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721242588931685681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7whgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f117d-b29b-41a4-97f9-259912fd66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 7b4c4aa6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a35a415102d2084fc4473777ffdc1b793a3f2e4ef07b1203aa3cecaf5496a8,PodSandboxId:0dfa9abde7a4540f1d65af1281c1d5540b7056e6b13e70a7d326665c5c3507fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721242588831145039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d3b9792-edc4-4e05-9403-e13289faba69,},Ann
otations:map[string]string{io.kubernetes.container.hash: f9ab1c85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c113fe5012415a8b4bc7042cacd41b98640a5ff67abfb4b142eece598706513,PodSandboxId:7497f051f9f8b7e4e560a8925f35ec0e482f9c631f899c6ea5cbedbcee12a3f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721242588832271826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvt54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3f31e4-5ec7-4731-87b0-a4082e52bfbc,},Annotations:map[string]string{io.kub
ernetes.container.hash: dcf4cbec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd93f6e85081e15c5b84892387d16c77bcef983a8b112108b45884e2d1c5e16f,PodSandboxId:8d89739fe5ed1c16629a5db27aa5882f3378ad1460b454011f3f1fbad088a5cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721242585040761817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6562aa04bb932d82684c593d9f2c44,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f6fc492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca42fc6a22e16e4a2c849c4b399cf1416ac11bff7401f8b5e7d09879b7f95557,PodSandboxId:d949e0d743ff074f5db04eebec89702a01233c8fb268b18a35dce28ef78e8fad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721242585018724547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e42142eae007ea8e03819ef1a7ee5b3,},Annotations:map[string]string{io.kubernetes.container.hash: cca3a628,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87cdb9250f0247ae0247c5ad252b317548321bfbece3d3081339a63799a3ee7f,PodSandboxId:55278faf6c4bd2d55f3b65c286c1e0c1aa29da1df363761a4df0f5629c92e839,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721242585005953055,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d520ed60a76def6a299a457c51d963,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21a506be09da7e47b592e1f71f4ead3df58c1e7fd95f2067f7d9b65a8b30726,PodSandboxId:96970cc5d1753532077ca2a039263d3a48199f3c7cbaff291e763cdb8416236d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721242585028077131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3f8d7535b02435578d6e4d7b663890,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4762f51a10e39b9d3db6088273ef46e11075e067a5af05563ad02376d2b16032,PodSandboxId:fef616af7f6711e01184f067f29623ec627c5df8e8fa027f944f3677bd393311,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721242254750058813,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5vj5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 368c0d4d-7a32-4133-a588-6994180de799,},Annotations:map[string]string{io.kubernetes.container.hash: 86ab4529,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f88dfe732d94434b50d5843b98c9e6e55b922129065f235e2feb2e6f943e18d,PodSandboxId:9aec21353c8fdafb61defc1d25c4fd601f358acdc8c092c64f37d363c3e48860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721242200881473163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7whgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f117d-b29b-41a4-97f9-259912fd66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 7b4c4aa6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d0256aba83fe2dadbde3f16a6c991063c384879e6a3dab481e5d5d55793d70,PodSandboxId:963cc6ee019020a061ba421b794380b75066e4c252a8800e5a209e610357a87d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721242200825331078,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 3d3b9792-edc4-4e05-9403-e13289faba69,},Annotations:map[string]string{io.kubernetes.container.hash: f9ab1c85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca075474ac25e2ab323c0e66a816afb9f0f55fc6fd98b42a1ffa7f9a14f9fbb,PodSandboxId:825799d719c60274ae5ebae15f1b5e17b332007636d9b8a31281e3c7240ef491,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721242189058609342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2dgx,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: c980f2ac-1e0d-4c68-9f92-168a82001f8a,},Annotations:map[string]string{io.kubernetes.container.hash: 1bb8452,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b14c23bb1ca87f39f25f624aa953ed6eebc4fa2a9a2d74a52c1250d7389eb1,PodSandboxId:c14ba1bb244dbf3198a2f6c5417d22be884b536a3bb6ab2b088baee719375b22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721242185324388231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvt54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 1b3f31e4-5ec7-4731-87b0-a4082e52bfbc,},Annotations:map[string]string{io.kubernetes.container.hash: dcf4cbec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b889bd8bab05d3c179cd226331a5f1ae9394a0fb433fb4aa0b5d2657c2d99d1,PodSandboxId:8e40ffbd5b407315d2ddb139cd90930dcd4f6165c3392b38258bc942160cbedf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721242165900957349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e42142eae007ea8e03819ef1a7ee5b3,}
,Annotations:map[string]string{io.kubernetes.container.hash: cca3a628,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee098e6d7719dc5ca7f9781813c78ba808672dddb1563969fb4856133308685,PodSandboxId:9bf305a4bce6c7eab56b5eeeea14f477514ceb437235368d80c99ee1ebb2fe99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721242165890770509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d520ed60a76def6a299a457c51d963,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730b32413676a97354e3c2dab9aeb0a0e9fc6b21402593c4074e7b18f29b8556,PodSandboxId:17a40604de1e1a5227396cb58bb8fb2ebf142c713e5792e545e4d971e259b715,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721242165802285739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3f8d7535b02435578d6e4d7b663890,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af6609edbfc9adc682e4e031907ae9d13380b5ee79245704dff50cbdecf54b4b,PodSandboxId:5499275fa0e06e77acdf0c6ce1dc3124d935b163fe96e7de82baeb15e863f261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721242165806136825,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6562aa04bb932d82684c593d9f2c44,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f6fc492,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3db2ce6e-ab00-4f8c-aaf5-11b5aa1f76ea name=/runtime.v1.RuntimeService/ListContainers
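	For reference, the truncated payload above is the raw response to a CRI ListContainers call against CRI-O (note the /runtime.v1.RuntimeService/ListContainers name at the end of the line). A minimal Go sketch that issues the same RPC directly over the node's CRI-O socket is shown below; it is not part of the test harness, and it assumes the k8s.io/cri-api and google.golang.org/grpc modules plus the unix socket path /var/run/crio/crio.sock recorded in the node annotations later in this log.

	// list_containers.go: hypothetical standalone client, not part of minikube or this test run.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O unix socket directly (path assumed from the cri-socket annotation).
		dialer := func(ctx context.Context, addr string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", addr)
		}
		conn, err := grpc.Dial("/var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithContextDialer(dialer))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// Same RPC that produced the ListContainers payload logged above.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
		}
	}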
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	58d3a21458f22       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   741aa941fd865       busybox-fc5497c4f-5vj5m
	bda6f98afceae       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      About a minute ago   Running             kindnet-cni               1                   a126ee58845bc       kindnet-d2dgx
	1d1a5e8dcee13       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   d395cff4b789f       coredns-7db6d8ff4d-7whgn
	3c113fe501241       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      About a minute ago   Running             kube-proxy                1                   7497f051f9f8b       kube-proxy-bvt54
	d9a35a415102d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   0dfa9abde7a45       storage-provisioner
	cd93f6e85081e       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      About a minute ago   Running             kube-apiserver            1                   8d89739fe5ed1       kube-apiserver-multinode-717026
	e21a506be09da       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      About a minute ago   Running             kube-controller-manager   1                   96970cc5d1753       kube-controller-manager-multinode-717026
	ca42fc6a22e16       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   d949e0d743ff0       etcd-multinode-717026
	87cdb9250f024       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      About a minute ago   Running             kube-scheduler            1                   55278faf6c4bd       kube-scheduler-multinode-717026
	4762f51a10e39       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   fef616af7f671       busybox-fc5497c4f-5vj5m
	6f88dfe732d94       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   9aec21353c8fd       coredns-7db6d8ff4d-7whgn
	60d0256aba83f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   963cc6ee01902       storage-provisioner
	9ca075474ac25       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    8 minutes ago        Exited              kindnet-cni               0                   825799d719c60       kindnet-d2dgx
	34b14c23bb1ca       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      8 minutes ago        Exited              kube-proxy                0                   c14ba1bb244db       kube-proxy-bvt54
	2b889bd8bab05       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   8e40ffbd5b407       etcd-multinode-717026
	bee098e6d7719       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      8 minutes ago        Exited              kube-scheduler            0                   9bf305a4bce6c       kube-scheduler-multinode-717026
	af6609edbfc9a       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      8 minutes ago        Exited              kube-apiserver            0                   5499275fa0e06       kube-apiserver-multinode-717026
	730b32413676a       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      8 minutes ago        Exited              kube-controller-manager   0                   17a40604de1e1       kube-controller-manager-multinode-717026
	
	
	==> coredns [1d1a5e8dcee13e757238d3fe01b25ae84be1c35ba4ef19fefd3e231656aefc11] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37040 - 24592 "HINFO IN 754692025035790143.4204466252186269178. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010521181s
	
	
	==> coredns [6f88dfe732d94434b50d5843b98c9e6e55b922129065f235e2feb2e6f943e18d] <==
	[INFO] 10.244.1.2:37014 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001790441s
	[INFO] 10.244.1.2:41768 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111117s
	[INFO] 10.244.1.2:35966 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090846s
	[INFO] 10.244.1.2:49503 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00122121s
	[INFO] 10.244.1.2:54852 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077842s
	[INFO] 10.244.1.2:48975 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085951s
	[INFO] 10.244.1.2:57497 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011806s
	[INFO] 10.244.0.3:46585 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076374s
	[INFO] 10.244.0.3:52646 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090822s
	[INFO] 10.244.0.3:51316 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00003596s
	[INFO] 10.244.0.3:33089 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00002643s
	[INFO] 10.244.1.2:54084 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141064s
	[INFO] 10.244.1.2:60043 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113407s
	[INFO] 10.244.1.2:38353 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010914s
	[INFO] 10.244.1.2:42908 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091006s
	[INFO] 10.244.0.3:42411 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100803s
	[INFO] 10.244.0.3:51807 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000165229s
	[INFO] 10.244.0.3:53748 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108863s
	[INFO] 10.244.0.3:60213 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00018404s
	[INFO] 10.244.1.2:57750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139368s
	[INFO] 10.244.1.2:53333 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108039s
	[INFO] 10.244.1.2:40105 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009985s
	[INFO] 10.244.1.2:58791 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101577s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-717026
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-717026
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=multinode-717026
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T18_49_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:49:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-717026
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:58:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:56:28 +0000   Wed, 17 Jul 2024 18:49:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:56:28 +0000   Wed, 17 Jul 2024 18:49:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:56:28 +0000   Wed, 17 Jul 2024 18:49:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:56:28 +0000   Wed, 17 Jul 2024 18:50:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.122
	  Hostname:    multinode-717026
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 42de24f5e4a34541883387786145f33b
	  System UUID:                42de24f5-e4a3-4541-8833-87786145f33b
	  Boot ID:                    ad568060-4c01-47aa-bd9f-f2b05f22939a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5vj5m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 coredns-7db6d8ff4d-7whgn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m26s
	  kube-system                 etcd-multinode-717026                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m40s
	  kube-system                 kindnet-d2dgx                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m26s
	  kube-system                 kube-apiserver-multinode-717026             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 kube-controller-manager-multinode-717026    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 kube-proxy-bvt54                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-scheduler-multinode-717026             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m24s                  kube-proxy       
	  Normal  Starting                 101s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  8m45s (x8 over 8m46s)  kubelet          Node multinode-717026 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m45s (x8 over 8m46s)  kubelet          Node multinode-717026 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m45s (x7 over 8m46s)  kubelet          Node multinode-717026 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m40s                  kubelet          Node multinode-717026 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m40s                  kubelet          Node multinode-717026 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m40s                  kubelet          Node multinode-717026 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m40s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m27s                  node-controller  Node multinode-717026 event: Registered Node multinode-717026 in Controller
	  Normal  NodeReady                8m10s                  kubelet          Node multinode-717026 status is now: NodeReady
	  Normal  Starting                 106s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s (x8 over 106s)    kubelet          Node multinode-717026 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x8 over 106s)    kubelet          Node multinode-717026 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x7 over 106s)    kubelet          Node multinode-717026 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  106s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           89s                    node-controller  Node multinode-717026 event: Registered Node multinode-717026 in Controller
	
	
	Name:               multinode-717026-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-717026-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=multinode-717026
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_57_09_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:57:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-717026-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:58:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:57:39 +0000   Wed, 17 Jul 2024 18:57:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:57:39 +0000   Wed, 17 Jul 2024 18:57:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:57:39 +0000   Wed, 17 Jul 2024 18:57:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:57:39 +0000   Wed, 17 Jul 2024 18:57:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    multinode-717026-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 df879ceb92f14b2eab1427dfb15b34d5
	  System UUID:                df879ceb-92f1-4b2e-ab14-27dfb15b34d5
	  Boot ID:                    605bdb59-fa53-4b1a-922e-f86525da8e19
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-p6tvs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-tkhlb              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m41s
	  kube-system                 kube-proxy-dkdzm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m36s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m42s (x2 over 7m42s)  kubelet     Node multinode-717026-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m42s (x2 over 7m42s)  kubelet     Node multinode-717026-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m42s (x2 over 7m42s)  kubelet     Node multinode-717026-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m41s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m21s                  kubelet     Node multinode-717026-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-717026-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-717026-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-717026-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-717026-m02 status is now: NodeReady
	
	
	Name:               multinode-717026-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-717026-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=multinode-717026
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_57_48_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:57:47 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-717026-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:58:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:58:07 +0000   Wed, 17 Jul 2024 18:57:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:58:07 +0000   Wed, 17 Jul 2024 18:57:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:58:07 +0000   Wed, 17 Jul 2024 18:57:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:58:07 +0000   Wed, 17 Jul 2024 18:58:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    multinode-717026-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3aa31cb89114524bfc724d251b82a5e
	  System UUID:                d3aa31cb-8911-4524-bfc7-24d251b82a5e
	  Boot ID:                    68ccc3f7-75fe-4a19-a669-edaf1dfdb8ca
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7dmgp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m42s
	  kube-system                 kube-proxy-j4x2f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m37s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m48s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m42s (x2 over 6m43s)  kubelet     Node multinode-717026-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s (x2 over 6m43s)  kubelet     Node multinode-717026-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m42s (x2 over 6m43s)  kubelet     Node multinode-717026-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m42s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m23s                  kubelet     Node multinode-717026-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m53s (x2 over 5m53s)  kubelet     Node multinode-717026-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m53s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m53s (x2 over 5m53s)  kubelet     Node multinode-717026-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m53s (x2 over 5m53s)  kubelet     Node multinode-717026-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m34s                  kubelet     Node multinode-717026-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-717026-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-717026-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-717026-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-717026-m03 status is now: NodeReady
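	The three node descriptions above carry the data the multinode tests assert on (Ready conditions, taints, pod CIDRs). As a rough illustration only, a standalone client-go sketch that prints the same Ready/taint/CIDR summary follows; it assumes the k8s.io/client-go module and the default kubeconfig at ~/.kube/config, and is not code from this repository or test run.

	// node_ready.go: hypothetical helper, assumes kubeconfig access to the minikube cluster.
	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			ready := "Unknown"
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					ready = string(c.Status)
				}
			}
			// Mirrors the Ready condition, Taints, and PodCIDR fields shown in "describe nodes" above.
			fmt.Printf("%-22s Ready=%-7s taints=%d podCIDR=%s\n", n.Name, ready, len(n.Spec.Taints), n.Spec.PodCIDR)
		}
	}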
	
	
	==> dmesg <==
	[  +0.060637] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.173351] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.135001] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.280757] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +4.122147] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.195114] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.061046] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.991541] systemd-fstab-generator[1284]: Ignoring "noauto" option for root device
	[  +0.074355] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.865726] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.677687] systemd-fstab-generator[1473]: Ignoring "noauto" option for root device
	[  +5.550507] kauditd_printk_skb: 56 callbacks suppressed
	[Jul17 18:50] kauditd_printk_skb: 14 callbacks suppressed
	[Jul17 18:56] systemd-fstab-generator[2795]: Ignoring "noauto" option for root device
	[  +0.140237] systemd-fstab-generator[2807]: Ignoring "noauto" option for root device
	[  +0.171312] systemd-fstab-generator[2821]: Ignoring "noauto" option for root device
	[  +0.142008] systemd-fstab-generator[2833]: Ignoring "noauto" option for root device
	[  +0.298480] systemd-fstab-generator[2861]: Ignoring "noauto" option for root device
	[  +7.654262] systemd-fstab-generator[2967]: Ignoring "noauto" option for root device
	[  +0.080043] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.079550] systemd-fstab-generator[3091]: Ignoring "noauto" option for root device
	[  +4.688060] kauditd_printk_skb: 74 callbacks suppressed
	[ +12.710706] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.361242] systemd-fstab-generator[3929]: Ignoring "noauto" option for root device
	[Jul17 18:57] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2b889bd8bab05d3c179cd226331a5f1ae9394a0fb433fb4aa0b5d2657c2d99d1] <==
	{"level":"warn","ts":"2024-07-17T18:50:37.651124Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"328.477199ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T18:50:37.651851Z","caller":"traceutil/trace.go:171","msg":"trace[1005186273] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:534; }","duration":"329.187669ms","start":"2024-07-17T18:50:37.322595Z","end":"2024-07-17T18:50:37.651783Z","steps":["trace[1005186273] 'range keys from in-memory index tree'  (duration: 328.434131ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:50:37.651925Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T18:50:37.322581Z","time spent":"329.317176ms","remote":"127.0.0.1:43524","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-07-17T18:51:28.242636Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.180996ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10197434619291488937 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-717026-m03.17e314bdcb646beb\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-717026-m03.17e314bdcb646beb\" value_size:646 lease:974062582436712653 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T18:51:28.242943Z","caller":"traceutil/trace.go:171","msg":"trace[942711802] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"240.238382ms","start":"2024-07-17T18:51:28.002679Z","end":"2024-07-17T18:51:28.242917Z","steps":["trace[942711802] 'process raft request'  (duration: 79.237469ms)","trace[942711802] 'compare'  (duration: 160.086267ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T18:51:28.243139Z","caller":"traceutil/trace.go:171","msg":"trace[43367061] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"191.548772ms","start":"2024-07-17T18:51:28.05158Z","end":"2024-07-17T18:51:28.243128Z","steps":["trace[43367061] 'process raft request'  (duration: 191.272614ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:51:29.873951Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.020249ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T18:51:29.874073Z","caller":"traceutil/trace.go:171","msg":"trace[124694286] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; response_count:0; response_revision:657; }","duration":"188.177953ms","start":"2024-07-17T18:51:29.685881Z","end":"2024-07-17T18:51:29.874059Z","steps":["trace[124694286] 'count revisions from in-memory index tree'  (duration: 187.928303ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:51:30.175744Z","caller":"traceutil/trace.go:171","msg":"trace[1492403570] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"226.233377ms","start":"2024-07-17T18:51:29.949497Z","end":"2024-07-17T18:51:30.17573Z","steps":["trace[1492403570] 'process raft request'  (duration: 225.303587ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:51:30.176085Z","caller":"traceutil/trace.go:171","msg":"trace[2013162657] transaction","detail":"{read_only:false; response_revision:659; number_of_response:1; }","duration":"105.417756ms","start":"2024-07-17T18:51:30.070653Z","end":"2024-07-17T18:51:30.176071Z","steps":["trace[2013162657] 'process raft request'  (duration: 104.851644ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:51:30.176214Z","caller":"traceutil/trace.go:171","msg":"trace[994084802] linearizableReadLoop","detail":"{readStateIndex:705; appliedIndex:703; }","duration":"106.945589ms","start":"2024-07-17T18:51:30.069262Z","end":"2024-07-17T18:51:30.176208Z","steps":["trace[994084802] 'read index received'  (duration: 289.857µs)","trace[994084802] 'applied index is now lower than readState.Index'  (duration: 106.654962ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T18:51:30.176399Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.123758ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2024-07-17T18:51:30.176438Z","caller":"traceutil/trace.go:171","msg":"trace[146943148] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:659; }","duration":"107.200457ms","start":"2024-07-17T18:51:30.069231Z","end":"2024-07-17T18:51:30.176431Z","steps":["trace[146943148] 'agreement among raft nodes before linearized reading'  (duration: 107.058806ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:51:30.176552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.233838ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2024-07-17T18:51:30.176585Z","caller":"traceutil/trace.go:171","msg":"trace[1717280603] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:659; }","duration":"107.27985ms","start":"2024-07-17T18:51:30.0693Z","end":"2024-07-17T18:51:30.17658Z","steps":["trace[1717280603] 'agreement among raft nodes before linearized reading'  (duration: 107.227946ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:54:42.183531Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-17T18:54:42.183649Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-717026","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.122:2380"],"advertise-client-urls":["https://192.168.39.122:2379"]}
	{"level":"warn","ts":"2024-07-17T18:54:42.183739Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T18:54:42.183824Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T18:54:42.249901Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.122:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T18:54:42.250012Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.122:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T18:54:42.251601Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"227d76f9723f8d84","current-leader-member-id":"227d76f9723f8d84"}
	{"level":"info","ts":"2024-07-17T18:54:42.254262Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.122:2380"}
	{"level":"info","ts":"2024-07-17T18:54:42.254444Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.122:2380"}
	{"level":"info","ts":"2024-07-17T18:54:42.254471Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-717026","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.122:2380"],"advertise-client-urls":["https://192.168.39.122:2379"]}
	
	
	==> etcd [ca42fc6a22e16e4a2c849c4b399cf1416ac11bff7401f8b5e7d09879b7f95557] <==
	{"level":"info","ts":"2024-07-17T18:56:25.446926Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T18:56:25.446937Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T18:56:25.446884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"227d76f9723f8d84 switched to configuration voters=(2485273383114083716)"}
	{"level":"info","ts":"2024-07-17T18:56:25.447048Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4faa3c7cd4b30445","local-member-id":"227d76f9723f8d84","added-peer-id":"227d76f9723f8d84","added-peer-peer-urls":["https://192.168.39.122:2380"]}
	{"level":"info","ts":"2024-07-17T18:56:25.447172Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4faa3c7cd4b30445","local-member-id":"227d76f9723f8d84","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:56:25.447233Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:56:25.466695Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T18:56:25.466984Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"227d76f9723f8d84","initial-advertise-peer-urls":["https://192.168.39.122:2380"],"listen-peer-urls":["https://192.168.39.122:2380"],"advertise-client-urls":["https://192.168.39.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T18:56:25.467031Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T18:56:25.467122Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.122:2380"}
	{"level":"info","ts":"2024-07-17T18:56:25.467145Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.122:2380"}
	{"level":"info","ts":"2024-07-17T18:56:26.371405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"227d76f9723f8d84 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T18:56:26.371458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"227d76f9723f8d84 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T18:56:26.371497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"227d76f9723f8d84 received MsgPreVoteResp from 227d76f9723f8d84 at term 2"}
	{"level":"info","ts":"2024-07-17T18:56:26.371514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"227d76f9723f8d84 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T18:56:26.371521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"227d76f9723f8d84 received MsgVoteResp from 227d76f9723f8d84 at term 3"}
	{"level":"info","ts":"2024-07-17T18:56:26.371529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"227d76f9723f8d84 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T18:56:26.371536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 227d76f9723f8d84 elected leader 227d76f9723f8d84 at term 3"}
	{"level":"info","ts":"2024-07-17T18:56:26.377672Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"227d76f9723f8d84","local-member-attributes":"{Name:multinode-717026 ClientURLs:[https://192.168.39.122:2379]}","request-path":"/0/members/227d76f9723f8d84/attributes","cluster-id":"4faa3c7cd4b30445","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T18:56:26.377776Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:56:26.392228Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.122:2379"}
	{"level":"info","ts":"2024-07-17T18:56:26.395229Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:56:26.39545Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T18:56:26.400601Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T18:56:26.401898Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:58:10 up 9 min,  0 users,  load average: 0.12, 0.16, 0.09
	Linux multinode-717026 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9ca075474ac25e2ab323c0e66a816afb9f0f55fc6fd98b42a1ffa7f9a14f9fbb] <==
	I0717 18:53:59.911806       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:54:09.912424       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:54:09.912456       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:54:09.912627       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:54:09.912658       1 main.go:326] Node multinode-717026-m03 has CIDR [10.244.3.0/24] 
	I0717 18:54:09.912723       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 18:54:09.912754       1 main.go:303] handling current node
	I0717 18:54:19.906923       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:54:19.906977       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:54:19.907169       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:54:19.907228       1 main.go:326] Node multinode-717026-m03 has CIDR [10.244.3.0/24] 
	I0717 18:54:19.907289       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 18:54:19.907408       1 main.go:303] handling current node
	I0717 18:54:29.915989       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 18:54:29.916102       1 main.go:303] handling current node
	I0717 18:54:29.916129       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:54:29.916157       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:54:29.916304       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:54:29.916325       1 main.go:326] Node multinode-717026-m03 has CIDR [10.244.3.0/24] 
	I0717 18:54:39.911398       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 18:54:39.911487       1 main.go:303] handling current node
	I0717 18:54:39.911511       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:54:39.911518       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:54:39.911709       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:54:39.911735       1 main.go:326] Node multinode-717026-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [bda6f98afceaeb088a1097df5f9dddc483a197ec1f4d27c1de623683df7dceb9] <==
	I0717 18:57:29.907056       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:57:39.907540       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:57:39.907648       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:57:39.907866       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:57:39.907899       1 main.go:326] Node multinode-717026-m03 has CIDR [10.244.3.0/24] 
	I0717 18:57:39.907973       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 18:57:39.908001       1 main.go:303] handling current node
	I0717 18:57:49.906719       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 18:57:49.906797       1 main.go:303] handling current node
	I0717 18:57:49.906814       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:57:49.906820       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:57:49.907004       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:57:49.907026       1 main.go:326] Node multinode-717026-m03 has CIDR [10.244.2.0/24] 
	I0717 18:57:59.906686       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:57:59.906764       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:57:59.906904       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:57:59.906911       1 main.go:326] Node multinode-717026-m03 has CIDR [10.244.2.0/24] 
	I0717 18:57:59.906955       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 18:57:59.906960       1 main.go:303] handling current node
	I0717 18:58:09.908144       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 18:58:09.908223       1 main.go:303] handling current node
	I0717 18:58:09.908237       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:58:09.908243       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:58:09.908411       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:58:09.908437       1 main.go:326] Node multinode-717026-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [af6609edbfc9adc682e4e031907ae9d13380b5ee79245704dff50cbdecf54b4b] <==
	I0717 18:49:44.426901       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 18:49:44.519971       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0717 18:50:56.530908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38536: use of closed network connection
	E0717 18:50:56.715085       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38558: use of closed network connection
	E0717 18:50:56.897013       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38584: use of closed network connection
	E0717 18:50:57.059144       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38604: use of closed network connection
	E0717 18:50:57.225832       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38614: use of closed network connection
	E0717 18:50:57.502925       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38660: use of closed network connection
	E0717 18:50:57.683832       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38672: use of closed network connection
	E0717 18:50:57.859957       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38682: use of closed network connection
	E0717 18:50:58.026017       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38702: use of closed network connection
	I0717 18:54:42.180091       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0717 18:54:42.210330       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.211556       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.211639       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.211692       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.212044       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.212513       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.212578       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.212628       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.212684       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.213483       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.213564       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0717 18:54:42.212126       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0717 18:54:42.213847       1 controller.go:84] Shutting down OpenAPI AggregationController
	
	
	==> kube-apiserver [cd93f6e85081e15c5b84892387d16c77bcef983a8b112108b45884e2d1c5e16f] <==
	I0717 18:56:27.896760       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 18:56:27.897288       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 18:56:27.897447       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 18:56:27.897733       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 18:56:27.897808       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 18:56:27.905552       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 18:56:27.907002       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0717 18:56:27.912590       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0717 18:56:27.923672       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 18:56:27.923797       1 aggregator.go:165] initial CRD sync complete...
	I0717 18:56:27.923840       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 18:56:27.923863       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 18:56:27.923886       1 cache.go:39] Caches are synced for autoregister controller
	I0717 18:56:27.931316       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 18:56:27.932978       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 18:56:27.933037       1 policy_source.go:224] refreshing policies
	I0717 18:56:27.988287       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 18:56:28.823735       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 18:56:30.226986       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 18:56:30.337081       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 18:56:30.355149       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 18:56:30.424191       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 18:56:30.434669       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 18:56:41.137122       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 18:56:41.427211       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [730b32413676a97354e3c2dab9aeb0a0e9fc6b21402593c4074e7b18f29b8556] <==
	I0717 18:50:29.078587       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-717026-m02\" does not exist"
	I0717 18:50:29.121570       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-717026-m02" podCIDRs=["10.244.1.0/24"]
	I0717 18:50:33.874249       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-717026-m02"
	I0717 18:50:49.311645       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:50:51.704696       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.102954ms"
	I0717 18:50:51.721801       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.88667ms"
	I0717 18:50:51.743086       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.233869ms"
	I0717 18:50:51.743289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.66µs"
	I0717 18:50:55.218055       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.175587ms"
	I0717 18:50:55.218400       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="168.329µs"
	I0717 18:50:55.929261       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.015433ms"
	I0717 18:50:55.929823       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.725µs"
	I0717 18:51:28.245724       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-717026-m03\" does not exist"
	I0717 18:51:28.249482       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:51:28.285575       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-717026-m03" podCIDRs=["10.244.2.0/24"]
	I0717 18:51:28.898969       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-717026-m03"
	I0717 18:51:47.613833       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:52:16.211089       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:52:17.280645       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:52:17.280713       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-717026-m03\" does not exist"
	I0717 18:52:17.299581       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-717026-m03" podCIDRs=["10.244.3.0/24"]
	I0717 18:52:36.522034       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:53:13.952572       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m03"
	I0717 18:53:14.011951       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.343187ms"
	I0717 18:53:14.012158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.886µs"
	
	
	==> kube-controller-manager [e21a506be09da7e47b592e1f71f4ead3df58c1e7fd95f2067f7d9b65a8b30726] <==
	I0717 18:56:41.754707       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 18:56:41.788707       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 18:57:05.118098       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.520517ms"
	I0717 18:57:05.118516       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.984µs"
	I0717 18:57:05.132480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.640766ms"
	I0717 18:57:05.133164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.29µs"
	I0717 18:57:09.346141       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-717026-m02\" does not exist"
	I0717 18:57:09.357698       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-717026-m02" podCIDRs=["10.244.1.0/24"]
	I0717 18:57:11.120936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.501µs"
	I0717 18:57:11.235808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.174µs"
	I0717 18:57:11.275590       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.819µs"
	I0717 18:57:11.289780       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.239µs"
	I0717 18:57:11.308806       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.975µs"
	I0717 18:57:11.317568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.902µs"
	I0717 18:57:11.319721       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.489µs"
	I0717 18:57:28.612573       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:57:28.636280       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.986µs"
	I0717 18:57:28.649657       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.514µs"
	I0717 18:57:32.825567       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.370955ms"
	I0717 18:57:32.826029       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.943µs"
	I0717 18:57:46.819924       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:57:47.899618       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:57:47.899764       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-717026-m03\" does not exist"
	I0717 18:57:47.909144       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-717026-m03" podCIDRs=["10.244.2.0/24"]
	I0717 18:58:07.125709       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	
	
	==> kube-proxy [34b14c23bb1ca87f39f25f624aa953ed6eebc4fa2a9a2d74a52c1250d7389eb1] <==
	I0717 18:49:45.466768       1 server_linux.go:69] "Using iptables proxy"
	I0717 18:49:45.476725       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.122"]
	I0717 18:49:45.521243       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:49:45.521290       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:49:45.521306       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:49:45.528070       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:49:45.528825       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:49:45.528853       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:49:45.529917       1 config.go:192] "Starting service config controller"
	I0717 18:49:45.529952       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:49:45.529997       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:49:45.530019       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:49:45.535197       1 config.go:319] "Starting node config controller"
	I0717 18:49:45.535224       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:49:45.630583       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 18:49:45.630610       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:49:45.635695       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [3c113fe5012415a8b4bc7042cacd41b98640a5ff67abfb4b142eece598706513] <==
	I0717 18:56:29.210327       1 server_linux.go:69] "Using iptables proxy"
	I0717 18:56:29.236799       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.122"]
	I0717 18:56:29.295518       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:56:29.295609       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:56:29.295642       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:56:29.307078       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:56:29.307725       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:56:29.309512       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:56:29.312173       1 config.go:192] "Starting service config controller"
	I0717 18:56:29.318130       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:56:29.315984       1 config.go:319] "Starting node config controller"
	I0717 18:56:29.319565       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:56:29.315464       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:56:29.321424       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:56:29.419963       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:56:29.421452       1 shared_informer.go:320] Caches are synced for node config
	I0717 18:56:29.423107       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [87cdb9250f0247ae0247c5ad252b317548321bfbece3d3081339a63799a3ee7f] <==
	I0717 18:56:26.320095       1 serving.go:380] Generated self-signed cert in-memory
	W0717 18:56:27.880408       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 18:56:27.880508       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 18:56:27.880548       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 18:56:27.880582       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 18:56:27.904518       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 18:56:27.904607       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:56:27.910061       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 18:56:27.910100       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 18:56:27.910837       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 18:56:27.910907       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 18:56:28.010448       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [bee098e6d7719dc5ca7f9781813c78ba808672dddb1563969fb4856133308685] <==
	E0717 18:49:28.332839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:49:28.331545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 18:49:28.333115       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:49:28.333202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 18:49:28.333404       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:49:28.333491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 18:49:28.335402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:49:28.335437       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:49:29.149778       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:49:29.149834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:49:29.354847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:49:29.354930       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:49:29.367845       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:49:29.367900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 18:49:29.390193       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:49:29.390270       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 18:49:29.408518       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:49:29.408565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:49:29.424130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:49:29.424234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0717 18:49:32.319579       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 18:54:42.186997       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0717 18:54:42.187160       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0717 18:54:42.190414       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0717 18:54:42.190474       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 17 18:56:25 multinode-717026 kubelet[3098]: W0717 18:56:25.236609    3098 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.122:8443: connect: connection refused
	Jul 17 18:56:25 multinode-717026 kubelet[3098]: E0717 18:56:25.236673    3098 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.122:8443: connect: connection refused
	Jul 17 18:56:25 multinode-717026 kubelet[3098]: I0717 18:56:25.791621    3098 kubelet_node_status.go:73] "Attempting to register node" node="multinode-717026"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.041091    3098 kubelet_node_status.go:112] "Node was previously registered" node="multinode-717026"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.041625    3098 kubelet_node_status.go:76] "Successfully registered node" node="multinode-717026"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.043103    3098 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.044025    3098 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.260714    3098 apiserver.go:52] "Watching apiserver"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.264481    3098 topology_manager.go:215] "Topology Admit Handler" podUID="c980f2ac-1e0d-4c68-9f92-168a82001f8a" podNamespace="kube-system" podName="kindnet-d2dgx"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.264645    3098 topology_manager.go:215] "Topology Admit Handler" podUID="f28f117d-b29b-41a4-97f9-259912fd66e3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7whgn"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.264709    3098 topology_manager.go:215] "Topology Admit Handler" podUID="1b3f31e4-5ec7-4731-87b0-a4082e52bfbc" podNamespace="kube-system" podName="kube-proxy-bvt54"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.264822    3098 topology_manager.go:215] "Topology Admit Handler" podUID="3d3b9792-edc4-4e05-9403-e13289faba69" podNamespace="kube-system" podName="storage-provisioner"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.266271    3098 topology_manager.go:215] "Topology Admit Handler" podUID="368c0d4d-7a32-4133-a588-6994180de799" podNamespace="default" podName="busybox-fc5497c4f-5vj5m"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.276415    3098 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.278854    3098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c980f2ac-1e0d-4c68-9f92-168a82001f8a-lib-modules\") pod \"kindnet-d2dgx\" (UID: \"c980f2ac-1e0d-4c68-9f92-168a82001f8a\") " pod="kube-system/kindnet-d2dgx"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.278906    3098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b3f31e4-5ec7-4731-87b0-a4082e52bfbc-xtables-lock\") pod \"kube-proxy-bvt54\" (UID: \"1b3f31e4-5ec7-4731-87b0-a4082e52bfbc\") " pod="kube-system/kube-proxy-bvt54"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.278989    3098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c980f2ac-1e0d-4c68-9f92-168a82001f8a-cni-cfg\") pod \"kindnet-d2dgx\" (UID: \"c980f2ac-1e0d-4c68-9f92-168a82001f8a\") " pod="kube-system/kindnet-d2dgx"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.279025    3098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c980f2ac-1e0d-4c68-9f92-168a82001f8a-xtables-lock\") pod \"kindnet-d2dgx\" (UID: \"c980f2ac-1e0d-4c68-9f92-168a82001f8a\") " pod="kube-system/kindnet-d2dgx"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.279172    3098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3d3b9792-edc4-4e05-9403-e13289faba69-tmp\") pod \"storage-provisioner\" (UID: \"3d3b9792-edc4-4e05-9403-e13289faba69\") " pod="kube-system/storage-provisioner"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.279217    3098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b3f31e4-5ec7-4731-87b0-a4082e52bfbc-lib-modules\") pod \"kube-proxy-bvt54\" (UID: \"1b3f31e4-5ec7-4731-87b0-a4082e52bfbc\") " pod="kube-system/kube-proxy-bvt54"
	Jul 17 18:57:24 multinode-717026 kubelet[3098]: E0717 18:57:24.386319    3098 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:57:24 multinode-717026 kubelet[3098]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:57:24 multinode-717026 kubelet[3098]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:57:24 multinode-717026 kubelet[3098]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:57:24 multinode-717026 kubelet[3098]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:58:09.627769  430770 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19282-392903/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-717026 -n multinode-717026
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-717026 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (332.21s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 stop
E0717 19:00:05.952047  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-717026 stop: exit status 82 (2m0.463001229s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-717026-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-717026 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-717026 status: exit status 3 (18.651003017s)

                                                
                                                
-- stdout --
	multinode-717026
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-717026-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:00:32.888806  431432 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.78:22: connect: no route to host
	E0717 19:00:32.888847  431432 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.78:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-717026 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-717026 -n multinode-717026
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-717026 logs -n 25: (1.469435944s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-717026 ssh -n                                                                 | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-717026 cp multinode-717026-m02:/home/docker/cp-test.txt                       | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026:/home/docker/cp-test_multinode-717026-m02_multinode-717026.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n                                                                 | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n multinode-717026 sudo cat                                       | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | /home/docker/cp-test_multinode-717026-m02_multinode-717026.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-717026 cp multinode-717026-m02:/home/docker/cp-test.txt                       | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m03:/home/docker/cp-test_multinode-717026-m02_multinode-717026-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n                                                                 | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n multinode-717026-m03 sudo cat                                   | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | /home/docker/cp-test_multinode-717026-m02_multinode-717026-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-717026 cp testdata/cp-test.txt                                                | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n                                                                 | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-717026 cp multinode-717026-m03:/home/docker/cp-test.txt                       | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4227061913/001/cp-test_multinode-717026-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n                                                                 | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-717026 cp multinode-717026-m03:/home/docker/cp-test.txt                       | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026:/home/docker/cp-test_multinode-717026-m03_multinode-717026.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n                                                                 | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n multinode-717026 sudo cat                                       | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | /home/docker/cp-test_multinode-717026-m03_multinode-717026.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-717026 cp multinode-717026-m03:/home/docker/cp-test.txt                       | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m02:/home/docker/cp-test_multinode-717026-m03_multinode-717026-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n                                                                 | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | multinode-717026-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-717026 ssh -n multinode-717026-m02 sudo cat                                   | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	|         | /home/docker/cp-test_multinode-717026-m03_multinode-717026-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-717026 node stop m03                                                          | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:51 UTC |
	| node    | multinode-717026 node start                                                             | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:51 UTC | 17 Jul 24 18:52 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-717026                                                                | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:52 UTC |                     |
	| stop    | -p multinode-717026                                                                     | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:52 UTC |                     |
	| start   | -p multinode-717026                                                                     | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:54 UTC | 17 Jul 24 18:58 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-717026                                                                | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:58 UTC |                     |
	| node    | multinode-717026 node delete                                                            | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:58 UTC | 17 Jul 24 18:58 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-717026 stop                                                                   | multinode-717026 | jenkins | v1.33.1 | 17 Jul 24 18:58 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:54:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:54:41.195086  429608 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:54:41.195459  429608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:54:41.195511  429608 out.go:304] Setting ErrFile to fd 2...
	I0717 18:54:41.195529  429608 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:54:41.195964  429608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:54:41.196957  429608 out.go:298] Setting JSON to false
	I0717 18:54:41.197974  429608 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9424,"bootTime":1721233057,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:54:41.198035  429608 start.go:139] virtualization: kvm guest
	I0717 18:54:41.199808  429608 out.go:177] * [multinode-717026] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:54:41.201093  429608 notify.go:220] Checking for updates...
	I0717 18:54:41.201128  429608 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 18:54:41.202396  429608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:54:41.203663  429608 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:54:41.204952  429608 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:54:41.206148  429608 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:54:41.207371  429608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:54:41.209210  429608 config.go:182] Loaded profile config "multinode-717026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:54:41.209349  429608 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:54:41.209993  429608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:54:41.210064  429608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:54:41.225971  429608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35135
	I0717 18:54:41.226462  429608 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:54:41.227134  429608 main.go:141] libmachine: Using API Version  1
	I0717 18:54:41.227191  429608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:54:41.227580  429608 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:54:41.227764  429608 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:54:41.263299  429608 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 18:54:41.264863  429608 start.go:297] selected driver: kvm2
	I0717 18:54:41.264891  429608 start.go:901] validating driver "kvm2" against &{Name:multinode-717026 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-717026 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:54:41.265086  429608 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:54:41.265517  429608 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:54:41.265626  429608 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:54:41.281291  429608 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:54:41.281958  429608 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:54:41.282025  429608 cni.go:84] Creating CNI manager for ""
	I0717 18:54:41.282036  429608 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 18:54:41.282106  429608 start.go:340] cluster config:
	{Name:multinode-717026 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-717026 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:54:41.282249  429608 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:54:41.284165  429608 out.go:177] * Starting "multinode-717026" primary control-plane node in "multinode-717026" cluster
	I0717 18:54:41.285551  429608 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:54:41.285589  429608 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:54:41.285603  429608 cache.go:56] Caching tarball of preloaded images
	I0717 18:54:41.285678  429608 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:54:41.285691  429608 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:54:41.285832  429608 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/config.json ...
	I0717 18:54:41.286164  429608 start.go:360] acquireMachinesLock for multinode-717026: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:54:41.286220  429608 start.go:364] duration metric: took 30.873µs to acquireMachinesLock for "multinode-717026"
	I0717 18:54:41.286240  429608 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:54:41.286249  429608 fix.go:54] fixHost starting: 
	I0717 18:54:41.286528  429608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:54:41.286568  429608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:54:41.301500  429608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35507
	I0717 18:54:41.301971  429608 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:54:41.302475  429608 main.go:141] libmachine: Using API Version  1
	I0717 18:54:41.302498  429608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:54:41.302839  429608 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:54:41.303031  429608 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:54:41.303198  429608 main.go:141] libmachine: (multinode-717026) Calling .GetState
	I0717 18:54:41.304878  429608 fix.go:112] recreateIfNeeded on multinode-717026: state=Running err=<nil>
	W0717 18:54:41.304910  429608 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:54:41.310039  429608 out.go:177] * Updating the running kvm2 "multinode-717026" VM ...
	I0717 18:54:41.315325  429608 machine.go:94] provisionDockerMachine start ...
	I0717 18:54:41.315351  429608 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:54:41.315574  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:54:41.318214  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.318668  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:54:41.318694  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.318849  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:54:41.319000  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:41.319181  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:41.319294  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:54:41.319411  429608 main.go:141] libmachine: Using SSH client type: native
	I0717 18:54:41.319618  429608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0717 18:54:41.319643  429608 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:54:41.425745  429608 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-717026
	
	I0717 18:54:41.425778  429608 main.go:141] libmachine: (multinode-717026) Calling .GetMachineName
	I0717 18:54:41.426042  429608 buildroot.go:166] provisioning hostname "multinode-717026"
	I0717 18:54:41.426069  429608 main.go:141] libmachine: (multinode-717026) Calling .GetMachineName
	I0717 18:54:41.426257  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:54:41.429102  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.429526  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:54:41.429556  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.429753  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:54:41.429924  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:41.430050  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:41.430184  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:54:41.430340  429608 main.go:141] libmachine: Using SSH client type: native
	I0717 18:54:41.430581  429608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0717 18:54:41.430599  429608 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-717026 && echo "multinode-717026" | sudo tee /etc/hostname
	I0717 18:54:41.544322  429608 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-717026
	
	I0717 18:54:41.544355  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:54:41.547142  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.547504  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:54:41.547531  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.547710  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:54:41.547887  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:41.548066  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:41.548227  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:54:41.548397  429608 main.go:141] libmachine: Using SSH client type: native
	I0717 18:54:41.548633  429608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0717 18:54:41.548651  429608 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-717026' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-717026/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-717026' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:54:41.649650  429608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:54:41.649692  429608 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 18:54:41.649733  429608 buildroot.go:174] setting up certificates
	I0717 18:54:41.649745  429608 provision.go:84] configureAuth start
	I0717 18:54:41.649763  429608 main.go:141] libmachine: (multinode-717026) Calling .GetMachineName
	I0717 18:54:41.650040  429608 main.go:141] libmachine: (multinode-717026) Calling .GetIP
	I0717 18:54:41.652837  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.653295  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:54:41.653339  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.653481  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:54:41.655623  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.655936  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:54:41.655970  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.656105  429608 provision.go:143] copyHostCerts
	I0717 18:54:41.656143  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:54:41.656181  429608 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 18:54:41.656194  429608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 18:54:41.656264  429608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 18:54:41.656360  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:54:41.656391  429608 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 18:54:41.656402  429608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 18:54:41.656447  429608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 18:54:41.656562  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:54:41.656595  429608 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 18:54:41.656601  429608 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 18:54:41.656640  429608 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 18:54:41.656780  429608 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.multinode-717026 san=[127.0.0.1 192.168.39.122 localhost minikube multinode-717026]
	I0717 18:54:41.891833  429608 provision.go:177] copyRemoteCerts
	I0717 18:54:41.891899  429608 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:54:41.891924  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:54:41.894869  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.895205  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:54:41.895229  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:41.895383  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:54:41.895596  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:41.895770  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:54:41.895930  429608 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/multinode-717026/id_rsa Username:docker}
	I0717 18:54:41.975186  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 18:54:41.975252  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 18:54:42.003523  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 18:54:42.003597  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:54:42.033013  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 18:54:42.033084  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 18:54:42.062545  429608 provision.go:87] duration metric: took 412.783979ms to configureAuth
	I0717 18:54:42.062580  429608 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:54:42.062837  429608 config.go:182] Loaded profile config "multinode-717026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:54:42.062910  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:54:42.065562  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:42.065938  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:54:42.065965  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:54:42.066142  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:54:42.066340  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:42.066491  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:54:42.066628  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:54:42.066807  429608 main.go:141] libmachine: Using SSH client type: native
	I0717 18:54:42.066982  429608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0717 18:54:42.066998  429608 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:56:12.871907  429608 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:56:12.871945  429608 machine.go:97] duration metric: took 1m31.556599945s to provisionDockerMachine
	I0717 18:56:12.871959  429608 start.go:293] postStartSetup for "multinode-717026" (driver="kvm2")
	I0717 18:56:12.871970  429608 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:56:12.871989  429608 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:56:12.872374  429608 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:56:12.872408  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:56:12.875544  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:12.876063  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:56:12.876103  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:12.876291  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:56:12.876532  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:56:12.876743  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:56:12.877003  429608 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/multinode-717026/id_rsa Username:docker}
	I0717 18:56:12.961014  429608 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:56:12.965279  429608 command_runner.go:130] > NAME=Buildroot
	I0717 18:56:12.965302  429608 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0717 18:56:12.965306  429608 command_runner.go:130] > ID=buildroot
	I0717 18:56:12.965311  429608 command_runner.go:130] > VERSION_ID=2023.02.9
	I0717 18:56:12.965316  429608 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0717 18:56:12.965365  429608 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:56:12.965381  429608 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 18:56:12.965437  429608 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 18:56:12.965532  429608 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 18:56:12.965546  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /etc/ssl/certs/4001712.pem
	I0717 18:56:12.965643  429608 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:56:12.974840  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:56:12.998105  429608 start.go:296] duration metric: took 126.131184ms for postStartSetup
	I0717 18:56:12.998157  429608 fix.go:56] duration metric: took 1m31.711907667s for fixHost
	I0717 18:56:12.998182  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:56:13.001302  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:13.001632  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:56:13.001660  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:13.001810  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:56:13.002007  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:56:13.002195  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:56:13.002306  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:56:13.002452  429608 main.go:141] libmachine: Using SSH client type: native
	I0717 18:56:13.002712  429608 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0717 18:56:13.002727  429608 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:56:13.101507  429608 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721242573.083621926
	
	I0717 18:56:13.101530  429608 fix.go:216] guest clock: 1721242573.083621926
	I0717 18:56:13.101537  429608 fix.go:229] Guest: 2024-07-17 18:56:13.083621926 +0000 UTC Remote: 2024-07-17 18:56:12.998163309 +0000 UTC m=+91.839815646 (delta=85.458617ms)
	I0717 18:56:13.101559  429608 fix.go:200] guest clock delta is within tolerance: 85.458617ms
	I0717 18:56:13.101564  429608 start.go:83] releasing machines lock for "multinode-717026", held for 1m31.815332434s
	I0717 18:56:13.101586  429608 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:56:13.101942  429608 main.go:141] libmachine: (multinode-717026) Calling .GetIP
	I0717 18:56:13.104859  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:13.105242  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:56:13.105270  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:13.105429  429608 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:56:13.106113  429608 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:56:13.106329  429608 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:56:13.106406  429608 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:56:13.106455  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:56:13.106578  429608 ssh_runner.go:195] Run: cat /version.json
	I0717 18:56:13.106619  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:56:13.109217  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:13.109469  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:13.109659  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:56:13.109685  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:13.109826  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:56:13.109848  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:56:13.109895  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:13.110014  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:56:13.110015  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:56:13.110218  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:56:13.110235  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:56:13.110362  429608 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/multinode-717026/id_rsa Username:docker}
	I0717 18:56:13.110385  429608 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:56:13.110508  429608 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/multinode-717026/id_rsa Username:docker}
	I0717 18:56:13.202599  429608 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 18:56:13.202712  429608 command_runner.go:130] > {"iso_version": "v1.33.1-1721146474-19264", "kicbase_version": "v0.0.44-1721064868-19249", "minikube_version": "v1.33.1", "commit": "6e0d7ef26437c947028f356d4449a323918e966e"}
	I0717 18:56:13.202914  429608 ssh_runner.go:195] Run: systemctl --version
	I0717 18:56:13.208934  429608 command_runner.go:130] > systemd 252 (252)
	I0717 18:56:13.208967  429608 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0717 18:56:13.209411  429608 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:56:13.370350  429608 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 18:56:13.379398  429608 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 18:56:13.379697  429608 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:56:13.379797  429608 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:56:13.389328  429608 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 18:56:13.389364  429608 start.go:495] detecting cgroup driver to use...
	I0717 18:56:13.389449  429608 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:56:13.405048  429608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:56:13.418396  429608 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:56:13.418465  429608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:56:13.431310  429608 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:56:13.444333  429608 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:56:13.583891  429608 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:56:13.728585  429608 docker.go:233] disabling docker service ...
	I0717 18:56:13.728667  429608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:56:13.745838  429608 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:56:13.759508  429608 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:56:13.900537  429608 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:56:14.039760  429608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:56:14.054034  429608 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:56:14.073349  429608 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 18:56:14.073745  429608 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:56:14.073817  429608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:56:14.085191  429608 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:56:14.085272  429608 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:56:14.096192  429608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:56:14.106886  429608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:56:14.124287  429608 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:56:14.149220  429608 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:56:14.159829  429608 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:56:14.170826  429608 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:56:14.181504  429608 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:56:14.190762  429608 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 18:56:14.190825  429608 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:56:14.199848  429608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:56:14.335905  429608 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:56:21.519053  429608 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.183101936s)
	I0717 18:56:21.519100  429608 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:56:21.519157  429608 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:56:21.524116  429608 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 18:56:21.524141  429608 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 18:56:21.524147  429608 command_runner.go:130] > Device: 0,22	Inode: 1327        Links: 1
	I0717 18:56:21.524154  429608 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 18:56:21.524161  429608 command_runner.go:130] > Access: 2024-07-17 18:56:21.397120090 +0000
	I0717 18:56:21.524169  429608 command_runner.go:130] > Modify: 2024-07-17 18:56:21.397120090 +0000
	I0717 18:56:21.524184  429608 command_runner.go:130] > Change: 2024-07-17 18:56:21.397120090 +0000
	I0717 18:56:21.524195  429608 command_runner.go:130] >  Birth: -
	I0717 18:56:21.524216  429608 start.go:563] Will wait 60s for crictl version
	I0717 18:56:21.524264  429608 ssh_runner.go:195] Run: which crictl
	I0717 18:56:21.528060  429608 command_runner.go:130] > /usr/bin/crictl
	I0717 18:56:21.528117  429608 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:56:21.569839  429608 command_runner.go:130] > Version:  0.1.0
	I0717 18:56:21.569871  429608 command_runner.go:130] > RuntimeName:  cri-o
	I0717 18:56:21.569908  429608 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0717 18:56:21.570089  429608 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 18:56:21.572593  429608 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:56:21.572679  429608 ssh_runner.go:195] Run: crio --version
	I0717 18:56:21.601042  429608 command_runner.go:130] > crio version 1.29.1
	I0717 18:56:21.601062  429608 command_runner.go:130] > Version:        1.29.1
	I0717 18:56:21.601068  429608 command_runner.go:130] > GitCommit:      unknown
	I0717 18:56:21.601072  429608 command_runner.go:130] > GitCommitDate:  unknown
	I0717 18:56:21.601076  429608 command_runner.go:130] > GitTreeState:   clean
	I0717 18:56:21.601085  429608 command_runner.go:130] > BuildDate:      2024-07-16T21:25:55Z
	I0717 18:56:21.601091  429608 command_runner.go:130] > GoVersion:      go1.21.6
	I0717 18:56:21.601094  429608 command_runner.go:130] > Compiler:       gc
	I0717 18:56:21.601099  429608 command_runner.go:130] > Platform:       linux/amd64
	I0717 18:56:21.601103  429608 command_runner.go:130] > Linkmode:       dynamic
	I0717 18:56:21.601108  429608 command_runner.go:130] > BuildTags:      
	I0717 18:56:21.601112  429608 command_runner.go:130] >   containers_image_ostree_stub
	I0717 18:56:21.601116  429608 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0717 18:56:21.601123  429608 command_runner.go:130] >   btrfs_noversion
	I0717 18:56:21.601127  429608 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0717 18:56:21.601134  429608 command_runner.go:130] >   libdm_no_deferred_remove
	I0717 18:56:21.601138  429608 command_runner.go:130] >   seccomp
	I0717 18:56:21.601142  429608 command_runner.go:130] > LDFlags:          unknown
	I0717 18:56:21.601146  429608 command_runner.go:130] > SeccompEnabled:   true
	I0717 18:56:21.601150  429608 command_runner.go:130] > AppArmorEnabled:  false
	I0717 18:56:21.601220  429608 ssh_runner.go:195] Run: crio --version
	I0717 18:56:21.629483  429608 command_runner.go:130] > crio version 1.29.1
	I0717 18:56:21.629505  429608 command_runner.go:130] > Version:        1.29.1
	I0717 18:56:21.629525  429608 command_runner.go:130] > GitCommit:      unknown
	I0717 18:56:21.629529  429608 command_runner.go:130] > GitCommitDate:  unknown
	I0717 18:56:21.629533  429608 command_runner.go:130] > GitTreeState:   clean
	I0717 18:56:21.629538  429608 command_runner.go:130] > BuildDate:      2024-07-16T21:25:55Z
	I0717 18:56:21.629542  429608 command_runner.go:130] > GoVersion:      go1.21.6
	I0717 18:56:21.629546  429608 command_runner.go:130] > Compiler:       gc
	I0717 18:56:21.629551  429608 command_runner.go:130] > Platform:       linux/amd64
	I0717 18:56:21.629556  429608 command_runner.go:130] > Linkmode:       dynamic
	I0717 18:56:21.629564  429608 command_runner.go:130] > BuildTags:      
	I0717 18:56:21.629568  429608 command_runner.go:130] >   containers_image_ostree_stub
	I0717 18:56:21.629577  429608 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0717 18:56:21.629584  429608 command_runner.go:130] >   btrfs_noversion
	I0717 18:56:21.629602  429608 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0717 18:56:21.629609  429608 command_runner.go:130] >   libdm_no_deferred_remove
	I0717 18:56:21.629613  429608 command_runner.go:130] >   seccomp
	I0717 18:56:21.629617  429608 command_runner.go:130] > LDFlags:          unknown
	I0717 18:56:21.629620  429608 command_runner.go:130] > SeccompEnabled:   true
	I0717 18:56:21.629624  429608 command_runner.go:130] > AppArmorEnabled:  false
	I0717 18:56:21.632607  429608 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:56:21.633941  429608 main.go:141] libmachine: (multinode-717026) Calling .GetIP
	I0717 18:56:21.636693  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:21.637047  429608 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:56:21.637070  429608 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:56:21.637262  429608 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:56:21.641569  429608 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0717 18:56:21.641663  429608 kubeadm.go:883] updating cluster {Name:multinode-717026 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-717026 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:56:21.641822  429608 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:56:21.641860  429608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:56:21.681076  429608 command_runner.go:130] > {
	I0717 18:56:21.681099  429608 command_runner.go:130] >   "images": [
	I0717 18:56:21.681103  429608 command_runner.go:130] >     {
	I0717 18:56:21.681111  429608 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0717 18:56:21.681115  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681121  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0717 18:56:21.681125  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681129  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681137  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0717 18:56:21.681144  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0717 18:56:21.681148  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681153  429608 command_runner.go:130] >       "size": "65908273",
	I0717 18:56:21.681165  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.681176  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681187  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681193  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681197  429608 command_runner.go:130] >     },
	I0717 18:56:21.681201  429608 command_runner.go:130] >     {
	I0717 18:56:21.681207  429608 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0717 18:56:21.681213  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681218  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0717 18:56:21.681224  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681228  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681235  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0717 18:56:21.681247  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0717 18:56:21.681254  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681258  429608 command_runner.go:130] >       "size": "87165492",
	I0717 18:56:21.681261  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.681269  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681275  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681279  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681282  429608 command_runner.go:130] >     },
	I0717 18:56:21.681285  429608 command_runner.go:130] >     {
	I0717 18:56:21.681290  429608 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0717 18:56:21.681296  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681302  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0717 18:56:21.681307  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681311  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681319  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0717 18:56:21.681327  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0717 18:56:21.681333  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681338  429608 command_runner.go:130] >       "size": "1363676",
	I0717 18:56:21.681341  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.681345  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681349  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681353  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681356  429608 command_runner.go:130] >     },
	I0717 18:56:21.681359  429608 command_runner.go:130] >     {
	I0717 18:56:21.681370  429608 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 18:56:21.681377  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681382  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 18:56:21.681388  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681392  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681399  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 18:56:21.681416  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 18:56:21.681422  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681426  429608 command_runner.go:130] >       "size": "31470524",
	I0717 18:56:21.681432  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.681436  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681440  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681444  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681447  429608 command_runner.go:130] >     },
	I0717 18:56:21.681451  429608 command_runner.go:130] >     {
	I0717 18:56:21.681458  429608 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0717 18:56:21.681462  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681467  429608 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0717 18:56:21.681473  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681476  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681483  429608 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0717 18:56:21.681493  429608 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0717 18:56:21.681496  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681500  429608 command_runner.go:130] >       "size": "61245718",
	I0717 18:56:21.681504  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.681509  429608 command_runner.go:130] >       "username": "nonroot",
	I0717 18:56:21.681513  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681520  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681524  429608 command_runner.go:130] >     },
	I0717 18:56:21.681527  429608 command_runner.go:130] >     {
	I0717 18:56:21.681533  429608 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0717 18:56:21.681537  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681542  429608 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0717 18:56:21.681546  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681550  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681557  429608 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0717 18:56:21.681570  429608 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0717 18:56:21.681573  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681577  429608 command_runner.go:130] >       "size": "150779692",
	I0717 18:56:21.681583  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.681596  429608 command_runner.go:130] >         "value": "0"
	I0717 18:56:21.681601  429608 command_runner.go:130] >       },
	I0717 18:56:21.681604  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681608  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681612  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681615  429608 command_runner.go:130] >     },
	I0717 18:56:21.681619  429608 command_runner.go:130] >     {
	I0717 18:56:21.681624  429608 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0717 18:56:21.681630  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681635  429608 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0717 18:56:21.681641  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681645  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681651  429608 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0717 18:56:21.681660  429608 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0717 18:56:21.681664  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681668  429608 command_runner.go:130] >       "size": "117609954",
	I0717 18:56:21.681671  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.681675  429608 command_runner.go:130] >         "value": "0"
	I0717 18:56:21.681679  429608 command_runner.go:130] >       },
	I0717 18:56:21.681684  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681688  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681694  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681697  429608 command_runner.go:130] >     },
	I0717 18:56:21.681700  429608 command_runner.go:130] >     {
	I0717 18:56:21.681706  429608 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0717 18:56:21.681712  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681717  429608 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0717 18:56:21.681722  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681726  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681750  429608 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0717 18:56:21.681761  429608 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0717 18:56:21.681764  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681772  429608 command_runner.go:130] >       "size": "112194888",
	I0717 18:56:21.681778  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.681782  429608 command_runner.go:130] >         "value": "0"
	I0717 18:56:21.681785  429608 command_runner.go:130] >       },
	I0717 18:56:21.681789  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681792  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681796  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681799  429608 command_runner.go:130] >     },
	I0717 18:56:21.681802  429608 command_runner.go:130] >     {
	I0717 18:56:21.681807  429608 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0717 18:56:21.681811  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681815  429608 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0717 18:56:21.681818  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681821  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681830  429608 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0717 18:56:21.681837  429608 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0717 18:56:21.681840  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681844  429608 command_runner.go:130] >       "size": "85953433",
	I0717 18:56:21.681847  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.681851  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681854  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681858  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681861  429608 command_runner.go:130] >     },
	I0717 18:56:21.681864  429608 command_runner.go:130] >     {
	I0717 18:56:21.681870  429608 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0717 18:56:21.681873  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681877  429608 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0717 18:56:21.681880  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681884  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681891  429608 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0717 18:56:21.681897  429608 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0717 18:56:21.681901  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681905  429608 command_runner.go:130] >       "size": "63051080",
	I0717 18:56:21.681908  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.681912  429608 command_runner.go:130] >         "value": "0"
	I0717 18:56:21.681915  429608 command_runner.go:130] >       },
	I0717 18:56:21.681925  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.681931  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.681934  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.681937  429608 command_runner.go:130] >     },
	I0717 18:56:21.681941  429608 command_runner.go:130] >     {
	I0717 18:56:21.681947  429608 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 18:56:21.681951  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.681958  429608 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 18:56:21.681961  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681967  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.681973  429608 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 18:56:21.681982  429608 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 18:56:21.681985  429608 command_runner.go:130] >       ],
	I0717 18:56:21.681989  429608 command_runner.go:130] >       "size": "750414",
	I0717 18:56:21.681994  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.681998  429608 command_runner.go:130] >         "value": "65535"
	I0717 18:56:21.682003  429608 command_runner.go:130] >       },
	I0717 18:56:21.682007  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.682010  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.682014  429608 command_runner.go:130] >       "pinned": true
	I0717 18:56:21.682018  429608 command_runner.go:130] >     }
	I0717 18:56:21.682021  429608 command_runner.go:130] >   ]
	I0717 18:56:21.682024  429608 command_runner.go:130] > }
	I0717 18:56:21.682676  429608 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:56:21.682697  429608 crio.go:433] Images already preloaded, skipping extraction
	I0717 18:56:21.682758  429608 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:56:21.713554  429608 command_runner.go:130] > {
	I0717 18:56:21.713579  429608 command_runner.go:130] >   "images": [
	I0717 18:56:21.713586  429608 command_runner.go:130] >     {
	I0717 18:56:21.713597  429608 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0717 18:56:21.713602  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.713611  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0717 18:56:21.713616  429608 command_runner.go:130] >       ],
	I0717 18:56:21.713622  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.713642  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0717 18:56:21.713657  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0717 18:56:21.713666  429608 command_runner.go:130] >       ],
	I0717 18:56:21.713674  429608 command_runner.go:130] >       "size": "65908273",
	I0717 18:56:21.713683  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.713701  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.713713  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.713720  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.713729  429608 command_runner.go:130] >     },
	I0717 18:56:21.713734  429608 command_runner.go:130] >     {
	I0717 18:56:21.713744  429608 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0717 18:56:21.713753  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.713770  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0717 18:56:21.713780  429608 command_runner.go:130] >       ],
	I0717 18:56:21.713789  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.713802  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0717 18:56:21.713817  429608 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0717 18:56:21.713826  429608 command_runner.go:130] >       ],
	I0717 18:56:21.713833  429608 command_runner.go:130] >       "size": "87165492",
	I0717 18:56:21.713842  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.713856  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.713866  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.713873  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.713881  429608 command_runner.go:130] >     },
	I0717 18:56:21.713887  429608 command_runner.go:130] >     {
	I0717 18:56:21.713897  429608 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0717 18:56:21.713906  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.713916  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0717 18:56:21.713925  429608 command_runner.go:130] >       ],
	I0717 18:56:21.713932  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.713947  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0717 18:56:21.713962  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0717 18:56:21.713971  429608 command_runner.go:130] >       ],
	I0717 18:56:21.713978  429608 command_runner.go:130] >       "size": "1363676",
	I0717 18:56:21.713987  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.713996  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.714006  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.714015  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.714021  429608 command_runner.go:130] >     },
	I0717 18:56:21.714030  429608 command_runner.go:130] >     {
	I0717 18:56:21.714042  429608 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 18:56:21.714057  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.714069  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 18:56:21.714077  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714084  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.714100  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 18:56:21.714124  429608 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 18:56:21.714133  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714141  429608 command_runner.go:130] >       "size": "31470524",
	I0717 18:56:21.714158  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.714168  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.714178  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.714185  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.714193  429608 command_runner.go:130] >     },
	I0717 18:56:21.714200  429608 command_runner.go:130] >     {
	I0717 18:56:21.714213  429608 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0717 18:56:21.714222  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.714231  429608 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0717 18:56:21.714238  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714248  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.714262  429608 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0717 18:56:21.714277  429608 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0717 18:56:21.714285  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714293  429608 command_runner.go:130] >       "size": "61245718",
	I0717 18:56:21.714302  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.714311  429608 command_runner.go:130] >       "username": "nonroot",
	I0717 18:56:21.714319  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.714328  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.714334  429608 command_runner.go:130] >     },
	I0717 18:56:21.714342  429608 command_runner.go:130] >     {
	I0717 18:56:21.714353  429608 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0717 18:56:21.714362  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.714372  429608 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0717 18:56:21.714379  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714387  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.714409  429608 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0717 18:56:21.714423  429608 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0717 18:56:21.714437  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714448  429608 command_runner.go:130] >       "size": "150779692",
	I0717 18:56:21.714456  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.714464  429608 command_runner.go:130] >         "value": "0"
	I0717 18:56:21.714476  429608 command_runner.go:130] >       },
	I0717 18:56:21.714485  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.714493  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.714501  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.714507  429608 command_runner.go:130] >     },
	I0717 18:56:21.714513  429608 command_runner.go:130] >     {
	I0717 18:56:21.714526  429608 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0717 18:56:21.714535  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.714546  429608 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0717 18:56:21.714554  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714561  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.714576  429608 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0717 18:56:21.714591  429608 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0717 18:56:21.714600  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714607  429608 command_runner.go:130] >       "size": "117609954",
	I0717 18:56:21.714616  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.714623  429608 command_runner.go:130] >         "value": "0"
	I0717 18:56:21.714630  429608 command_runner.go:130] >       },
	I0717 18:56:21.714643  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.714653  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.714662  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.714670  429608 command_runner.go:130] >     },
	I0717 18:56:21.714676  429608 command_runner.go:130] >     {
	I0717 18:56:21.714687  429608 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0717 18:56:21.714696  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.714707  429608 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0717 18:56:21.714715  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714722  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.714757  429608 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0717 18:56:21.714802  429608 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0717 18:56:21.714817  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714824  429608 command_runner.go:130] >       "size": "112194888",
	I0717 18:56:21.714841  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.714851  429608 command_runner.go:130] >         "value": "0"
	I0717 18:56:21.714859  429608 command_runner.go:130] >       },
	I0717 18:56:21.714867  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.714876  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.714884  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.714891  429608 command_runner.go:130] >     },
	I0717 18:56:21.714897  429608 command_runner.go:130] >     {
	I0717 18:56:21.714908  429608 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0717 18:56:21.714916  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.714925  429608 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0717 18:56:21.714933  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714940  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.714956  429608 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0717 18:56:21.714974  429608 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0717 18:56:21.714982  429608 command_runner.go:130] >       ],
	I0717 18:56:21.714990  429608 command_runner.go:130] >       "size": "85953433",
	I0717 18:56:21.714998  429608 command_runner.go:130] >       "uid": null,
	I0717 18:56:21.715005  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.715014  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.715021  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.715029  429608 command_runner.go:130] >     },
	I0717 18:56:21.715035  429608 command_runner.go:130] >     {
	I0717 18:56:21.715046  429608 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0717 18:56:21.715055  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.715064  429608 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0717 18:56:21.715072  429608 command_runner.go:130] >       ],
	I0717 18:56:21.715080  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.715095  429608 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0717 18:56:21.715111  429608 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0717 18:56:21.715119  429608 command_runner.go:130] >       ],
	I0717 18:56:21.715127  429608 command_runner.go:130] >       "size": "63051080",
	I0717 18:56:21.715136  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.715142  429608 command_runner.go:130] >         "value": "0"
	I0717 18:56:21.715150  429608 command_runner.go:130] >       },
	I0717 18:56:21.715157  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.715174  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.715184  429608 command_runner.go:130] >       "pinned": false
	I0717 18:56:21.715192  429608 command_runner.go:130] >     },
	I0717 18:56:21.715198  429608 command_runner.go:130] >     {
	I0717 18:56:21.715209  429608 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 18:56:21.715218  429608 command_runner.go:130] >       "repoTags": [
	I0717 18:56:21.715227  429608 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 18:56:21.715235  429608 command_runner.go:130] >       ],
	I0717 18:56:21.715241  429608 command_runner.go:130] >       "repoDigests": [
	I0717 18:56:21.715255  429608 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 18:56:21.715269  429608 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 18:56:21.715278  429608 command_runner.go:130] >       ],
	I0717 18:56:21.715285  429608 command_runner.go:130] >       "size": "750414",
	I0717 18:56:21.715294  429608 command_runner.go:130] >       "uid": {
	I0717 18:56:21.715301  429608 command_runner.go:130] >         "value": "65535"
	I0717 18:56:21.715308  429608 command_runner.go:130] >       },
	I0717 18:56:21.715315  429608 command_runner.go:130] >       "username": "",
	I0717 18:56:21.715323  429608 command_runner.go:130] >       "spec": null,
	I0717 18:56:21.715330  429608 command_runner.go:130] >       "pinned": true
	I0717 18:56:21.715338  429608 command_runner.go:130] >     }
	I0717 18:56:21.715344  429608 command_runner.go:130] >   ]
	I0717 18:56:21.715348  429608 command_runner.go:130] > }
	I0717 18:56:21.715507  429608 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:56:21.715522  429608 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:56:21.715532  429608 kubeadm.go:934] updating node { 192.168.39.122 8443 v1.30.2 crio true true} ...
	I0717 18:56:21.715688  429608 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-717026 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-717026 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:56:21.715777  429608 ssh_runner.go:195] Run: crio config
	I0717 18:56:21.748283  429608 command_runner.go:130] ! time="2024-07-17 18:56:21.730146618Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0717 18:56:21.754255  429608 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0717 18:56:21.768352  429608 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 18:56:21.768386  429608 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 18:56:21.768396  429608 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 18:56:21.768401  429608 command_runner.go:130] > #
	I0717 18:56:21.768411  429608 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 18:56:21.768421  429608 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 18:56:21.768431  429608 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 18:56:21.768445  429608 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 18:56:21.768454  429608 command_runner.go:130] > # reload'.
	I0717 18:56:21.768463  429608 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 18:56:21.768473  429608 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 18:56:21.768500  429608 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 18:56:21.768513  429608 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 18:56:21.768521  429608 command_runner.go:130] > [crio]
	I0717 18:56:21.768538  429608 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 18:56:21.768549  429608 command_runner.go:130] > # containers images, in this directory.
	I0717 18:56:21.768557  429608 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 18:56:21.768574  429608 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 18:56:21.768584  429608 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 18:56:21.768596  429608 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0717 18:56:21.768602  429608 command_runner.go:130] > # imagestore = ""
	I0717 18:56:21.768608  429608 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 18:56:21.768616  429608 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 18:56:21.768622  429608 command_runner.go:130] > storage_driver = "overlay"
	I0717 18:56:21.768628  429608 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 18:56:21.768636  429608 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 18:56:21.768651  429608 command_runner.go:130] > storage_option = [
	I0717 18:56:21.768658  429608 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 18:56:21.768661  429608 command_runner.go:130] > ]
	I0717 18:56:21.768670  429608 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 18:56:21.768676  429608 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 18:56:21.768683  429608 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 18:56:21.768689  429608 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 18:56:21.768697  429608 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 18:56:21.768701  429608 command_runner.go:130] > # always happen on a node reboot
	I0717 18:56:21.768707  429608 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 18:56:21.768718  429608 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 18:56:21.768726  429608 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 18:56:21.768732  429608 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 18:56:21.768738  429608 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0717 18:56:21.768746  429608 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 18:56:21.768756  429608 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 18:56:21.768762  429608 command_runner.go:130] > # internal_wipe = true
	I0717 18:56:21.768769  429608 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0717 18:56:21.768776  429608 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0717 18:56:21.768780  429608 command_runner.go:130] > # internal_repair = false
	I0717 18:56:21.768788  429608 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 18:56:21.768794  429608 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 18:56:21.768801  429608 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 18:56:21.768806  429608 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 18:56:21.768814  429608 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 18:56:21.768817  429608 command_runner.go:130] > [crio.api]
	I0717 18:56:21.768822  429608 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 18:56:21.768826  429608 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 18:56:21.768833  429608 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 18:56:21.768838  429608 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 18:56:21.768846  429608 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 18:56:21.768851  429608 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 18:56:21.768857  429608 command_runner.go:130] > # stream_port = "0"
	I0717 18:56:21.768862  429608 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 18:56:21.768868  429608 command_runner.go:130] > # stream_enable_tls = false
	I0717 18:56:21.768874  429608 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 18:56:21.768889  429608 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 18:56:21.768900  429608 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 18:56:21.768908  429608 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 18:56:21.768913  429608 command_runner.go:130] > # minutes.
	I0717 18:56:21.768917  429608 command_runner.go:130] > # stream_tls_cert = ""
	I0717 18:56:21.768924  429608 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 18:56:21.768932  429608 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 18:56:21.768938  429608 command_runner.go:130] > # stream_tls_key = ""
	I0717 18:56:21.768944  429608 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 18:56:21.768951  429608 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 18:56:21.768977  429608 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 18:56:21.768983  429608 command_runner.go:130] > # stream_tls_ca = ""
	I0717 18:56:21.768991  429608 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0717 18:56:21.768997  429608 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 18:56:21.769004  429608 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0717 18:56:21.769010  429608 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 18:56:21.769016  429608 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 18:56:21.769023  429608 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 18:56:21.769027  429608 command_runner.go:130] > [crio.runtime]
	I0717 18:56:21.769034  429608 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 18:56:21.769039  429608 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 18:56:21.769045  429608 command_runner.go:130] > # "nofile=1024:2048"
	I0717 18:56:21.769051  429608 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 18:56:21.769057  429608 command_runner.go:130] > # default_ulimits = [
	I0717 18:56:21.769060  429608 command_runner.go:130] > # ]
	I0717 18:56:21.769066  429608 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 18:56:21.769072  429608 command_runner.go:130] > # no_pivot = false
	I0717 18:56:21.769078  429608 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 18:56:21.769086  429608 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 18:56:21.769092  429608 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 18:56:21.769098  429608 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 18:56:21.769104  429608 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 18:56:21.769110  429608 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 18:56:21.769117  429608 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 18:56:21.769121  429608 command_runner.go:130] > # Cgroup setting for conmon
	I0717 18:56:21.769129  429608 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 18:56:21.769138  429608 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 18:56:21.769146  429608 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 18:56:21.769153  429608 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 18:56:21.769161  429608 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 18:56:21.769167  429608 command_runner.go:130] > conmon_env = [
	I0717 18:56:21.769172  429608 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 18:56:21.769177  429608 command_runner.go:130] > ]
	I0717 18:56:21.769183  429608 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 18:56:21.769189  429608 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 18:56:21.769194  429608 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 18:56:21.769201  429608 command_runner.go:130] > # default_env = [
	I0717 18:56:21.769206  429608 command_runner.go:130] > # ]
	I0717 18:56:21.769218  429608 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 18:56:21.769232  429608 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0717 18:56:21.769240  429608 command_runner.go:130] > # selinux = false
	I0717 18:56:21.769258  429608 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 18:56:21.769270  429608 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 18:56:21.769282  429608 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 18:56:21.769290  429608 command_runner.go:130] > # seccomp_profile = ""
	I0717 18:56:21.769299  429608 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 18:56:21.769308  429608 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 18:56:21.769316  429608 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 18:56:21.769321  429608 command_runner.go:130] > # which might increase security.
	I0717 18:56:21.769325  429608 command_runner.go:130] > # This option is currently deprecated,
	I0717 18:56:21.769333  429608 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0717 18:56:21.769340  429608 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 18:56:21.769346  429608 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 18:56:21.769353  429608 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 18:56:21.769361  429608 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 18:56:21.769367  429608 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 18:56:21.769373  429608 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:56:21.769378  429608 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 18:56:21.769385  429608 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 18:56:21.769389  429608 command_runner.go:130] > # the cgroup blockio controller.
	I0717 18:56:21.769395  429608 command_runner.go:130] > # blockio_config_file = ""
	I0717 18:56:21.769401  429608 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0717 18:56:21.769546  429608 command_runner.go:130] > # blockio parameters.
	I0717 18:56:21.769677  429608 command_runner.go:130] > # blockio_reload = false
	I0717 18:56:21.769694  429608 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 18:56:21.769700  429608 command_runner.go:130] > # irqbalance daemon.
	I0717 18:56:21.770920  429608 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 18:56:21.771221  429608 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0717 18:56:21.771244  429608 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0717 18:56:21.771257  429608 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0717 18:56:21.771267  429608 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0717 18:56:21.771279  429608 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 18:56:21.771291  429608 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:56:21.771298  429608 command_runner.go:130] > # rdt_config_file = ""
	I0717 18:56:21.771310  429608 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 18:56:21.771320  429608 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 18:56:21.771349  429608 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 18:56:21.771359  429608 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 18:56:21.771368  429608 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 18:56:21.771379  429608 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 18:56:21.771385  429608 command_runner.go:130] > # will be added.
	I0717 18:56:21.771393  429608 command_runner.go:130] > # default_capabilities = [
	I0717 18:56:21.771399  429608 command_runner.go:130] > # 	"CHOWN",
	I0717 18:56:21.771405  429608 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 18:56:21.771412  429608 command_runner.go:130] > # 	"FSETID",
	I0717 18:56:21.771418  429608 command_runner.go:130] > # 	"FOWNER",
	I0717 18:56:21.771424  429608 command_runner.go:130] > # 	"SETGID",
	I0717 18:56:21.771431  429608 command_runner.go:130] > # 	"SETUID",
	I0717 18:56:21.771437  429608 command_runner.go:130] > # 	"SETPCAP",
	I0717 18:56:21.771444  429608 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 18:56:21.771450  429608 command_runner.go:130] > # 	"KILL",
	I0717 18:56:21.771455  429608 command_runner.go:130] > # ]
	I0717 18:56:21.771468  429608 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0717 18:56:21.771482  429608 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0717 18:56:21.771492  429608 command_runner.go:130] > # add_inheritable_capabilities = false
	I0717 18:56:21.771504  429608 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 18:56:21.771516  429608 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 18:56:21.771524  429608 command_runner.go:130] > default_sysctls = [
	I0717 18:56:21.771534  429608 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0717 18:56:21.771541  429608 command_runner.go:130] > ]
	I0717 18:56:21.771550  429608 command_runner.go:130] > # List of devices on the host that a
	I0717 18:56:21.771563  429608 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 18:56:21.771572  429608 command_runner.go:130] > # allowed_devices = [
	I0717 18:56:21.771577  429608 command_runner.go:130] > # 	"/dev/fuse",
	I0717 18:56:21.771581  429608 command_runner.go:130] > # ]
	I0717 18:56:21.771588  429608 command_runner.go:130] > # List of additional devices. specified as
	I0717 18:56:21.771604  429608 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 18:56:21.771616  429608 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 18:56:21.771638  429608 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 18:56:21.771648  429608 command_runner.go:130] > # additional_devices = [
	I0717 18:56:21.771656  429608 command_runner.go:130] > # ]
	I0717 18:56:21.771666  429608 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 18:56:21.771675  429608 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 18:56:21.771683  429608 command_runner.go:130] > # 	"/etc/cdi",
	I0717 18:56:21.771692  429608 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 18:56:21.771697  429608 command_runner.go:130] > # ]
	I0717 18:56:21.771709  429608 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 18:56:21.771723  429608 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 18:56:21.771731  429608 command_runner.go:130] > # Defaults to false.
	I0717 18:56:21.771740  429608 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 18:56:21.771753  429608 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 18:56:21.771766  429608 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 18:56:21.771775  429608 command_runner.go:130] > # hooks_dir = [
	I0717 18:56:21.771785  429608 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 18:56:21.771792  429608 command_runner.go:130] > # ]
	I0717 18:56:21.771808  429608 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 18:56:21.771838  429608 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 18:56:21.771855  429608 command_runner.go:130] > # its default mounts from the following two files:
	I0717 18:56:21.771863  429608 command_runner.go:130] > #
	I0717 18:56:21.771874  429608 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 18:56:21.771888  429608 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 18:56:21.771900  429608 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 18:56:21.771909  429608 command_runner.go:130] > #
	I0717 18:56:21.771920  429608 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 18:56:21.771933  429608 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 18:56:21.771946  429608 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 18:56:21.771956  429608 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 18:56:21.771962  429608 command_runner.go:130] > #
	I0717 18:56:21.771972  429608 command_runner.go:130] > # default_mounts_file = ""
	I0717 18:56:21.771982  429608 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 18:56:21.771995  429608 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 18:56:21.772005  429608 command_runner.go:130] > pids_limit = 1024
	I0717 18:56:21.772019  429608 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0717 18:56:21.772032  429608 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 18:56:21.772045  429608 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 18:56:21.772061  429608 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 18:56:21.772071  429608 command_runner.go:130] > # log_size_max = -1
	I0717 18:56:21.772086  429608 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 18:56:21.772101  429608 command_runner.go:130] > # log_to_journald = false
	I0717 18:56:21.772114  429608 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 18:56:21.772136  429608 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 18:56:21.772145  429608 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 18:56:21.772156  429608 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 18:56:21.772168  429608 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 18:56:21.772178  429608 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 18:56:21.772191  429608 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 18:56:21.772199  429608 command_runner.go:130] > # read_only = false
	I0717 18:56:21.772209  429608 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 18:56:21.772222  429608 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 18:56:21.772232  429608 command_runner.go:130] > # live configuration reload.
	I0717 18:56:21.772239  429608 command_runner.go:130] > # log_level = "info"
	I0717 18:56:21.772251  429608 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 18:56:21.772263  429608 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:56:21.772272  429608 command_runner.go:130] > # log_filter = ""
	I0717 18:56:21.772283  429608 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 18:56:21.772298  429608 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 18:56:21.772307  429608 command_runner.go:130] > # separated by comma.
	I0717 18:56:21.772322  429608 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 18:56:21.772331  429608 command_runner.go:130] > # uid_mappings = ""
	I0717 18:56:21.772341  429608 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 18:56:21.772354  429608 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 18:56:21.772364  429608 command_runner.go:130] > # separated by comma.
	I0717 18:56:21.772380  429608 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 18:56:21.772389  429608 command_runner.go:130] > # gid_mappings = ""
	I0717 18:56:21.772402  429608 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 18:56:21.772414  429608 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 18:56:21.772427  429608 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 18:56:21.772440  429608 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 18:56:21.772451  429608 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 18:56:21.772464  429608 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 18:56:21.772477  429608 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 18:56:21.772503  429608 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 18:56:21.772517  429608 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 18:56:21.772528  429608 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 18:56:21.772638  429608 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 18:56:21.772657  429608 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 18:56:21.772668  429608 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 18:56:21.772680  429608 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 18:56:21.772691  429608 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 18:56:21.772704  429608 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 18:56:21.772714  429608 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 18:56:21.772723  429608 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 18:56:21.772734  429608 command_runner.go:130] > drop_infra_ctr = false
	I0717 18:56:21.772747  429608 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 18:56:21.772761  429608 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 18:56:21.772777  429608 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 18:56:21.772786  429608 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 18:56:21.772798  429608 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0717 18:56:21.772816  429608 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0717 18:56:21.772828  429608 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0717 18:56:21.772840  429608 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0717 18:56:21.772860  429608 command_runner.go:130] > # shared_cpuset = ""
	I0717 18:56:21.772873  429608 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 18:56:21.772883  429608 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 18:56:21.772894  429608 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 18:56:21.772909  429608 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 18:56:21.772919  429608 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 18:56:21.772930  429608 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0717 18:56:21.772944  429608 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0717 18:56:21.772954  429608 command_runner.go:130] > # enable_criu_support = false
	I0717 18:56:21.772964  429608 command_runner.go:130] > # Enable/disable the generation of the container,
	I0717 18:56:21.772977  429608 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0717 18:56:21.772988  429608 command_runner.go:130] > # enable_pod_events = false
	I0717 18:56:21.773009  429608 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 18:56:21.773035  429608 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0717 18:56:21.773043  429608 command_runner.go:130] > # default_runtime = "runc"
	I0717 18:56:21.773053  429608 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 18:56:21.773069  429608 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 18:56:21.773088  429608 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 18:56:21.773111  429608 command_runner.go:130] > # creation as a file is not desired either.
	I0717 18:56:21.773129  429608 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 18:56:21.773140  429608 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 18:56:21.773151  429608 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 18:56:21.773157  429608 command_runner.go:130] > # ]
	I0717 18:56:21.773168  429608 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 18:56:21.773182  429608 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 18:56:21.773195  429608 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0717 18:56:21.773206  429608 command_runner.go:130] > # Each entry in the table should follow the format:
	I0717 18:56:21.773212  429608 command_runner.go:130] > #
	I0717 18:56:21.773223  429608 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0717 18:56:21.773235  429608 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0717 18:56:21.773263  429608 command_runner.go:130] > # runtime_type = "oci"
	I0717 18:56:21.773275  429608 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0717 18:56:21.773286  429608 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0717 18:56:21.773295  429608 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0717 18:56:21.773306  429608 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0717 18:56:21.773315  429608 command_runner.go:130] > # monitor_env = []
	I0717 18:56:21.773327  429608 command_runner.go:130] > # privileged_without_host_devices = false
	I0717 18:56:21.773337  429608 command_runner.go:130] > # allowed_annotations = []
	I0717 18:56:21.773348  429608 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0717 18:56:21.773358  429608 command_runner.go:130] > # Where:
	I0717 18:56:21.773367  429608 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0717 18:56:21.773381  429608 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0717 18:56:21.773394  429608 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 18:56:21.773405  429608 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 18:56:21.773414  429608 command_runner.go:130] > #   in $PATH.
	I0717 18:56:21.773436  429608 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0717 18:56:21.773448  429608 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 18:56:21.773461  429608 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0717 18:56:21.773470  429608 command_runner.go:130] > #   state.
	I0717 18:56:21.773482  429608 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 18:56:21.773496  429608 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0717 18:56:21.773509  429608 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 18:56:21.773523  429608 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 18:56:21.773536  429608 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 18:56:21.773548  429608 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 18:56:21.773563  429608 command_runner.go:130] > #   The currently recognized values are:
	I0717 18:56:21.773575  429608 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 18:56:21.773587  429608 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 18:56:21.773598  429608 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 18:56:21.773612  429608 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 18:56:21.773628  429608 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 18:56:21.773642  429608 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 18:56:21.773657  429608 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0717 18:56:21.773671  429608 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0717 18:56:21.773682  429608 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 18:56:21.773697  429608 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0717 18:56:21.773708  429608 command_runner.go:130] > #   deprecated option "conmon".
	I0717 18:56:21.773721  429608 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0717 18:56:21.773733  429608 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0717 18:56:21.773747  429608 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0717 18:56:21.773758  429608 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 18:56:21.773772  429608 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0717 18:56:21.773783  429608 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0717 18:56:21.773797  429608 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0717 18:56:21.773817  429608 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0717 18:56:21.773826  429608 command_runner.go:130] > #
	I0717 18:56:21.773835  429608 command_runner.go:130] > # Using the seccomp notifier feature:
	I0717 18:56:21.773843  429608 command_runner.go:130] > #
	I0717 18:56:21.773854  429608 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0717 18:56:21.773868  429608 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0717 18:56:21.773877  429608 command_runner.go:130] > #
	I0717 18:56:21.773888  429608 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0717 18:56:21.773901  429608 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0717 18:56:21.773910  429608 command_runner.go:130] > #
	I0717 18:56:21.773921  429608 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0717 18:56:21.773930  429608 command_runner.go:130] > # feature.
	I0717 18:56:21.773936  429608 command_runner.go:130] > #
	I0717 18:56:21.773946  429608 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0717 18:56:21.773960  429608 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0717 18:56:21.773975  429608 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0717 18:56:21.773994  429608 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0717 18:56:21.774008  429608 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0717 18:56:21.774016  429608 command_runner.go:130] > #
	I0717 18:56:21.774026  429608 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0717 18:56:21.774037  429608 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0717 18:56:21.774045  429608 command_runner.go:130] > #
	I0717 18:56:21.774056  429608 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0717 18:56:21.774070  429608 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0717 18:56:21.774080  429608 command_runner.go:130] > #
	I0717 18:56:21.774092  429608 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0717 18:56:21.774104  429608 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0717 18:56:21.774112  429608 command_runner.go:130] > # limitation.
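The notifier comments above only describe the feature; a minimal sketch of opting a pod into it could look like the following. The pod name, namespace, and image are illustrative assumptions, not taken from this run, and the step assumes the selected runtime handler lists "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations.

# Illustrative only: a pod annotated for the seccomp notifier described above.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-notify-demo          # hypothetical name
  annotations:
    io.kubernetes.cri-o.seccompNotifierAction: "stop"
spec:
  restartPolicy: Never               # required, per the comments above
  containers:
  - name: demo
    image: busybox                   # hypothetical image
    command: ["sleep", "3600"]
EOF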
	I0717 18:56:21.774124  429608 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 18:56:21.774135  429608 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 18:56:21.774145  429608 command_runner.go:130] > runtime_type = "oci"
	I0717 18:56:21.774154  429608 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 18:56:21.774165  429608 command_runner.go:130] > runtime_config_path = ""
	I0717 18:56:21.774175  429608 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0717 18:56:21.774185  429608 command_runner.go:130] > monitor_cgroup = "pod"
	I0717 18:56:21.774193  429608 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 18:56:21.774201  429608 command_runner.go:130] > monitor_env = [
	I0717 18:56:21.774213  429608 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 18:56:21.774222  429608 command_runner.go:130] > ]
	I0717 18:56:21.774231  429608 command_runner.go:130] > privileged_without_host_devices = false
	I0717 18:56:21.774259  429608 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 18:56:21.774276  429608 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 18:56:21.774287  429608 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 18:56:21.774301  429608 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 18:56:21.774318  429608 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 18:56:21.774331  429608 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 18:56:21.774348  429608 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 18:56:21.774362  429608 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 18:56:21.774370  429608 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 18:56:21.774380  429608 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 18:56:21.774387  429608 command_runner.go:130] > # Example:
	I0717 18:56:21.774395  429608 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 18:56:21.774404  429608 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 18:56:21.774418  429608 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 18:56:21.774428  429608 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 18:56:21.774435  429608 command_runner.go:130] > # cpuset = 0
	I0717 18:56:21.774443  429608 command_runner.go:130] > # cpushares = "0-1"
	I0717 18:56:21.774449  429608 command_runner.go:130] > # Where:
	I0717 18:56:21.774456  429608 command_runner.go:130] > # The workload name is workload-type.
	I0717 18:56:21.774468  429608 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 18:56:21.774477  429608 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 18:56:21.774487  429608 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 18:56:21.774500  429608 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 18:56:21.774511  429608 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 18:56:21.774519  429608 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0717 18:56:21.774530  429608 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0717 18:56:21.774538  429608 command_runner.go:130] > # Default value is set to true
	I0717 18:56:21.774546  429608 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0717 18:56:21.774556  429608 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0717 18:56:21.774567  429608 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0717 18:56:21.774575  429608 command_runner.go:130] > # Default value is set to 'false'
	I0717 18:56:21.774583  429608 command_runner.go:130] > # disable_hostport_mapping = false
	I0717 18:56:21.774594  429608 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 18:56:21.774600  429608 command_runner.go:130] > #
	I0717 18:56:21.774611  429608 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 18:56:21.774627  429608 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 18:56:21.774641  429608 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 18:56:21.774655  429608 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 18:56:21.774668  429608 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 18:56:21.774678  429608 command_runner.go:130] > [crio.image]
	I0717 18:56:21.774688  429608 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 18:56:21.774700  429608 command_runner.go:130] > # default_transport = "docker://"
	I0717 18:56:21.774712  429608 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 18:56:21.774726  429608 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 18:56:21.774735  429608 command_runner.go:130] > # global_auth_file = ""
	I0717 18:56:21.774746  429608 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 18:56:21.774757  429608 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:56:21.774767  429608 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0717 18:56:21.774781  429608 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 18:56:21.774794  429608 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 18:56:21.774809  429608 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:56:21.774825  429608 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 18:56:21.774838  429608 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 18:56:21.774849  429608 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 18:56:21.774860  429608 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 18:56:21.774873  429608 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 18:56:21.774884  429608 command_runner.go:130] > # pause_command = "/pause"
	I0717 18:56:21.774897  429608 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0717 18:56:21.774911  429608 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0717 18:56:21.774922  429608 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0717 18:56:21.774939  429608 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0717 18:56:21.774952  429608 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0717 18:56:21.774965  429608 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0717 18:56:21.774973  429608 command_runner.go:130] > # pinned_images = [
	I0717 18:56:21.774981  429608 command_runner.go:130] > # ]
	I0717 18:56:21.774994  429608 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 18:56:21.775008  429608 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 18:56:21.775022  429608 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 18:56:21.775033  429608 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 18:56:21.775045  429608 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 18:56:21.775053  429608 command_runner.go:130] > # signature_policy = ""
	I0717 18:56:21.775066  429608 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0717 18:56:21.775080  429608 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0717 18:56:21.775093  429608 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0717 18:56:21.775107  429608 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0717 18:56:21.775117  429608 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0717 18:56:21.775129  429608 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0717 18:56:21.775141  429608 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 18:56:21.775155  429608 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 18:56:21.775165  429608 command_runner.go:130] > # changing them here.
	I0717 18:56:21.775173  429608 command_runner.go:130] > # insecure_registries = [
	I0717 18:56:21.775182  429608 command_runner.go:130] > # ]
	I0717 18:56:21.775193  429608 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 18:56:21.775206  429608 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 18:56:21.775214  429608 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 18:56:21.775225  429608 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 18:56:21.775237  429608 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 18:56:21.775253  429608 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 18:56:21.775263  429608 command_runner.go:130] > # CNI plugins.
	I0717 18:56:21.775270  429608 command_runner.go:130] > [crio.network]
	I0717 18:56:21.775283  429608 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 18:56:21.775293  429608 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 18:56:21.775303  429608 command_runner.go:130] > # cni_default_network = ""
	I0717 18:56:21.775316  429608 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 18:56:21.775327  429608 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 18:56:21.775339  429608 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 18:56:21.775349  429608 command_runner.go:130] > # plugin_dirs = [
	I0717 18:56:21.775358  429608 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 18:56:21.775366  429608 command_runner.go:130] > # ]
	I0717 18:56:21.775376  429608 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 18:56:21.775386  429608 command_runner.go:130] > [crio.metrics]
	I0717 18:56:21.775396  429608 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 18:56:21.775407  429608 command_runner.go:130] > enable_metrics = true
	I0717 18:56:21.775416  429608 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 18:56:21.775427  429608 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 18:56:21.775440  429608 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0717 18:56:21.775454  429608 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 18:56:21.775466  429608 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 18:56:21.775478  429608 command_runner.go:130] > # metrics_collectors = [
	I0717 18:56:21.775487  429608 command_runner.go:130] > # 	"operations",
	I0717 18:56:21.775498  429608 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 18:56:21.775509  429608 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 18:56:21.775519  429608 command_runner.go:130] > # 	"operations_errors",
	I0717 18:56:21.775527  429608 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 18:56:21.775537  429608 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 18:56:21.775546  429608 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 18:56:21.775557  429608 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 18:56:21.775564  429608 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 18:56:21.775570  429608 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 18:56:21.775577  429608 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 18:56:21.775588  429608 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0717 18:56:21.775598  429608 command_runner.go:130] > # 	"containers_oom_total",
	I0717 18:56:21.775606  429608 command_runner.go:130] > # 	"containers_oom",
	I0717 18:56:21.775613  429608 command_runner.go:130] > # 	"processes_defunct",
	I0717 18:56:21.775622  429608 command_runner.go:130] > # 	"operations_total",
	I0717 18:56:21.775632  429608 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 18:56:21.775641  429608 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 18:56:21.775652  429608 command_runner.go:130] > # 	"operations_errors_total",
	I0717 18:56:21.775662  429608 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 18:56:21.775672  429608 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 18:56:21.775681  429608 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 18:56:21.775689  429608 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 18:56:21.775704  429608 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 18:56:21.775715  429608 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 18:56:21.775724  429608 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0717 18:56:21.775735  429608 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0717 18:56:21.775742  429608 command_runner.go:130] > # ]
	I0717 18:56:21.775756  429608 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 18:56:21.775766  429608 command_runner.go:130] > # metrics_port = 9090
	I0717 18:56:21.775776  429608 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 18:56:21.775785  429608 command_runner.go:130] > # metrics_socket = ""
	I0717 18:56:21.775795  429608 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 18:56:21.775816  429608 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 18:56:21.775830  429608 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 18:56:21.775841  429608 command_runner.go:130] > # certificate on any modification event.
	I0717 18:56:21.775851  429608 command_runner.go:130] > # metrics_cert = ""
	I0717 18:56:21.775861  429608 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 18:56:21.775874  429608 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 18:56:21.775882  429608 command_runner.go:130] > # metrics_key = ""
	I0717 18:56:21.775896  429608 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 18:56:21.775903  429608 command_runner.go:130] > [crio.tracing]
	I0717 18:56:21.775914  429608 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 18:56:21.775925  429608 command_runner.go:130] > # enable_tracing = false
	I0717 18:56:21.775935  429608 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 18:56:21.775946  429608 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 18:56:21.775961  429608 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0717 18:56:21.775973  429608 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 18:56:21.775983  429608 command_runner.go:130] > # CRI-O NRI configuration.
	I0717 18:56:21.775990  429608 command_runner.go:130] > [crio.nri]
	I0717 18:56:21.775998  429608 command_runner.go:130] > # Globally enable or disable NRI.
	I0717 18:56:21.776008  429608 command_runner.go:130] > # enable_nri = false
	I0717 18:56:21.776015  429608 command_runner.go:130] > # NRI socket to listen on.
	I0717 18:56:21.776024  429608 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0717 18:56:21.776034  429608 command_runner.go:130] > # NRI plugin directory to use.
	I0717 18:56:21.776045  429608 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0717 18:56:21.776056  429608 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0717 18:56:21.776067  429608 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0717 18:56:21.776080  429608 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0717 18:56:21.776090  429608 command_runner.go:130] > # nri_disable_connections = false
	I0717 18:56:21.776102  429608 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0717 18:56:21.776114  429608 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0717 18:56:21.776124  429608 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0717 18:56:21.776134  429608 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0717 18:56:21.776148  429608 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 18:56:21.776156  429608 command_runner.go:130] > [crio.stats]
	I0717 18:56:21.776171  429608 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 18:56:21.776183  429608 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 18:56:21.776192  429608 command_runner.go:130] > # stats_collection_period = 0
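As an aside on the [crio.runtime.runtimes] table echoed in the configuration above, a minimal drop-in sketch for registering an additional handler might look as follows. The handler name crun, its binary path, and the drop-in file name are assumptions for illustration, not part of this run; the keys mirror the [crio.runtime.runtimes.runc] block captured above.

# Illustrative only: add a second OCI runtime handler alongside the runc block above.
sudo tee /etc/crio/crio.conf.d/10-crun.conf >/dev/null <<'EOF'
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
runtime_root = "/run/crun"
monitor_path = "/usr/libexec/crio/conmon"
monitor_cgroup = "pod"
EOF
sudo systemctl restart crio   # pick up the drop-in; the handler is then selectable via a RuntimeClass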
	I0717 18:56:21.776316  429608 cni.go:84] Creating CNI manager for ""
	I0717 18:56:21.776330  429608 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 18:56:21.776342  429608 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:56:21.776374  429608 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.122 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-717026 NodeName:multinode-717026 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.122"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.122 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:56:21.776580  429608 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.122
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-717026"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.122
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.122"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:56:21.776662  429608 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:56:21.786982  429608 command_runner.go:130] > kubeadm
	I0717 18:56:21.787001  429608 command_runner.go:130] > kubectl
	I0717 18:56:21.787007  429608 command_runner.go:130] > kubelet
	I0717 18:56:21.787034  429608 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:56:21.787086  429608 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:56:21.796632  429608 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0717 18:56:21.812922  429608 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:56:21.830064  429608 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0717 18:56:21.846909  429608 ssh_runner.go:195] Run: grep 192.168.39.122	control-plane.minikube.internal$ /etc/hosts
	I0717 18:56:21.850796  429608 command_runner.go:130] > 192.168.39.122	control-plane.minikube.internal
	I0717 18:56:21.850865  429608 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:56:21.990987  429608 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:56:22.005942  429608 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026 for IP: 192.168.39.122
	I0717 18:56:22.005968  429608 certs.go:194] generating shared ca certs ...
	I0717 18:56:22.005991  429608 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:56:22.006149  429608 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 18:56:22.006186  429608 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 18:56:22.006197  429608 certs.go:256] generating profile certs ...
	I0717 18:56:22.006298  429608 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/client.key
	I0717 18:56:22.006371  429608 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/apiserver.key.376728e4
	I0717 18:56:22.006405  429608 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/proxy-client.key
	I0717 18:56:22.006414  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 18:56:22.006425  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 18:56:22.006436  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 18:56:22.006449  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 18:56:22.006460  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 18:56:22.006471  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 18:56:22.006482  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 18:56:22.006494  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 18:56:22.006551  429608 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 18:56:22.006578  429608 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 18:56:22.006590  429608 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 18:56:22.006612  429608 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 18:56:22.006637  429608 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:56:22.006659  429608 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 18:56:22.006700  429608 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 18:56:22.006731  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem -> /usr/share/ca-certificates/400171.pem
	I0717 18:56:22.006744  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> /usr/share/ca-certificates/4001712.pem
	I0717 18:56:22.006755  429608 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:56:22.007437  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:56:22.032783  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:56:22.055903  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:56:22.079017  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 18:56:22.101956  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 18:56:22.125556  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:56:22.148182  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:56:22.173755  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/multinode-717026/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:56:22.196656  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 18:56:22.219520  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 18:56:22.242326  429608 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:56:22.264831  429608 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:56:22.281678  429608 ssh_runner.go:195] Run: openssl version
	I0717 18:56:22.287464  429608 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0717 18:56:22.287670  429608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 18:56:22.298914  429608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 18:56:22.303530  429608 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 18:56:22.303565  429608 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 18:56:22.303632  429608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 18:56:22.309216  429608 command_runner.go:130] > 51391683
	I0717 18:56:22.309294  429608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 18:56:22.318729  429608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 18:56:22.329954  429608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 18:56:22.334060  429608 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 18:56:22.334201  429608 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 18:56:22.334241  429608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 18:56:22.339767  429608 command_runner.go:130] > 3ec20f2e
	I0717 18:56:22.339824  429608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:56:22.349009  429608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:56:22.359813  429608 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:56:22.363972  429608 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:56:22.364115  429608 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:56:22.364153  429608 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:56:22.369326  429608 command_runner.go:130] > b5213941
	I0717 18:56:22.369507  429608 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
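The three hash-and-link sequences above are the standard OpenSSL CA-trust step. Condensed into a standalone sketch for a single certificate, it looks like this; the variable names are illustrative, while the path and the b5213941 hash come from this run.

CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints b5213941 in this run
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # OpenSSL resolves CAs by <subject-hash>.0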
	I0717 18:56:22.378690  429608 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:56:22.382978  429608 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:56:22.382999  429608 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0717 18:56:22.383010  429608 command_runner.go:130] > Device: 253,1	Inode: 533781      Links: 1
	I0717 18:56:22.383019  429608 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 18:56:22.383028  429608 command_runner.go:130] > Access: 2024-07-17 18:49:22.072509315 +0000
	I0717 18:56:22.383035  429608 command_runner.go:130] > Modify: 2024-07-17 18:49:22.072509315 +0000
	I0717 18:56:22.383045  429608 command_runner.go:130] > Change: 2024-07-17 18:49:22.072509315 +0000
	I0717 18:56:22.383054  429608 command_runner.go:130] >  Birth: 2024-07-17 18:49:22.072509315 +0000
	I0717 18:56:22.383103  429608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:56:22.388411  429608 command_runner.go:130] > Certificate will not expire
	I0717 18:56:22.388586  429608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:56:22.393816  429608 command_runner.go:130] > Certificate will not expire
	I0717 18:56:22.394035  429608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:56:22.399218  429608 command_runner.go:130] > Certificate will not expire
	I0717 18:56:22.399272  429608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:56:22.404774  429608 command_runner.go:130] > Certificate will not expire
	I0717 18:56:22.404824  429608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:56:22.409909  429608 command_runner.go:130] > Certificate will not expire
	I0717 18:56:22.410056  429608 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 18:56:22.415014  429608 command_runner.go:130] > Certificate will not expire
	I0717 18:56:22.415236  429608 kubeadm.go:392] StartCluster: {Name:multinode-717026 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:multinode-717026 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.198 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:56:22.415368  429608 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:56:22.415427  429608 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:56:22.451380  429608 command_runner.go:130] > 6f88dfe732d94434b50d5843b98c9e6e55b922129065f235e2feb2e6f943e18d
	I0717 18:56:22.451417  429608 command_runner.go:130] > 60d0256aba83fe2dadbde3f16a6c991063c384879e6a3dab481e5d5d55793d70
	I0717 18:56:22.451424  429608 command_runner.go:130] > 9ca075474ac25e2ab323c0e66a816afb9f0f55fc6fd98b42a1ffa7f9a14f9fbb
	I0717 18:56:22.451430  429608 command_runner.go:130] > 34b14c23bb1ca87f39f25f624aa953ed6eebc4fa2a9a2d74a52c1250d7389eb1
	I0717 18:56:22.451436  429608 command_runner.go:130] > 2b889bd8bab05d3c179cd226331a5f1ae9394a0fb433fb4aa0b5d2657c2d99d1
	I0717 18:56:22.451441  429608 command_runner.go:130] > bee098e6d7719dc5ca7f9781813c78ba808672dddb1563969fb4856133308685
	I0717 18:56:22.451447  429608 command_runner.go:130] > af6609edbfc9adc682e4e031907ae9d13380b5ee79245704dff50cbdecf54b4b
	I0717 18:56:22.451454  429608 command_runner.go:130] > 730b32413676a97354e3c2dab9aeb0a0e9fc6b21402593c4074e7b18f29b8556
	I0717 18:56:22.452696  429608 cri.go:89] found id: "6f88dfe732d94434b50d5843b98c9e6e55b922129065f235e2feb2e6f943e18d"
	I0717 18:56:22.452718  429608 cri.go:89] found id: "60d0256aba83fe2dadbde3f16a6c991063c384879e6a3dab481e5d5d55793d70"
	I0717 18:56:22.452722  429608 cri.go:89] found id: "9ca075474ac25e2ab323c0e66a816afb9f0f55fc6fd98b42a1ffa7f9a14f9fbb"
	I0717 18:56:22.452726  429608 cri.go:89] found id: "34b14c23bb1ca87f39f25f624aa953ed6eebc4fa2a9a2d74a52c1250d7389eb1"
	I0717 18:56:22.452754  429608 cri.go:89] found id: "2b889bd8bab05d3c179cd226331a5f1ae9394a0fb433fb4aa0b5d2657c2d99d1"
	I0717 18:56:22.452765  429608 cri.go:89] found id: "bee098e6d7719dc5ca7f9781813c78ba808672dddb1563969fb4856133308685"
	I0717 18:56:22.452769  429608 cri.go:89] found id: "af6609edbfc9adc682e4e031907ae9d13380b5ee79245704dff50cbdecf54b4b"
	I0717 18:56:22.452772  429608 cri.go:89] found id: "730b32413676a97354e3c2dab9aeb0a0e9fc6b21402593c4074e7b18f29b8556"
	I0717 18:56:22.452775  429608 cri.go:89] found id: ""
	I0717 18:56:22.452825  429608 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.521579535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242833521550562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52927ab4-767c-426a-bd1c-d23da5ce4893 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.522783214Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31273a2d-e0b1-4129-bc21-c8cc386621cc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.522847858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31273a2d-e0b1-4129-bc21-c8cc386621cc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.523296059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58d3a21458f22673c29a7fee8cc849867a0129dbe38797621a789cd4680508ac,PodSandboxId:741aa941fd865775662354ade1d4f7e9ca5641f9808499285da9192c575e903c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721242622559885852,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5vj5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 368c0d4d-7a32-4133-a588-6994180de799,},Annotations:map[string]string{io.kubernetes.container.hash: 86ab4529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bda6f98afceaeb088a1097df5f9dddc483a197ec1f4d27c1de623683df7dceb9,PodSandboxId:a126ee58845bc8232e62e4cf69be1197c3ab8790e98ca4aea760db7de027abb4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721242588963440703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2dgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c980f2ac-1e0d-4c68-9f92-168a82001f8a,},Annotations:map[string]string{io.kubernetes.container.hash: 1bb8452,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a5e8dcee13e757238d3fe01b25ae84be1c35ba4ef19fefd3e231656aefc11,PodSandboxId:d395cff4b789fd35694afbdd894571d0b7ff14a708b64be9edbb206c06176f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721242588931685681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7whgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f117d-b29b-41a4-97f9-259912fd66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 7b4c4aa6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a35a415102d2084fc4473777ffdc1b793a3f2e4ef07b1203aa3cecaf5496a8,PodSandboxId:0dfa9abde7a4540f1d65af1281c1d5540b7056e6b13e70a7d326665c5c3507fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721242588831145039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d3b9792-edc4-4e05-9403-e13289faba69,},Ann
otations:map[string]string{io.kubernetes.container.hash: f9ab1c85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c113fe5012415a8b4bc7042cacd41b98640a5ff67abfb4b142eece598706513,PodSandboxId:7497f051f9f8b7e4e560a8925f35ec0e482f9c631f899c6ea5cbedbcee12a3f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721242588832271826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvt54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3f31e4-5ec7-4731-87b0-a4082e52bfbc,},Annotations:map[string]string{io.kub
ernetes.container.hash: dcf4cbec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd93f6e85081e15c5b84892387d16c77bcef983a8b112108b45884e2d1c5e16f,PodSandboxId:8d89739fe5ed1c16629a5db27aa5882f3378ad1460b454011f3f1fbad088a5cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721242585040761817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6562aa04bb932d82684c593d9f2c44,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f6fc492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca42fc6a22e16e4a2c849c4b399cf1416ac11bff7401f8b5e7d09879b7f95557,PodSandboxId:d949e0d743ff074f5db04eebec89702a01233c8fb268b18a35dce28ef78e8fad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721242585018724547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e42142eae007ea8e03819ef1a7ee5b3,},Annotations:map[string]string{io.kubernetes.container.hash: cca3a628,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87cdb9250f0247ae0247c5ad252b317548321bfbece3d3081339a63799a3ee7f,PodSandboxId:55278faf6c4bd2d55f3b65c286c1e0c1aa29da1df363761a4df0f5629c92e839,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721242585005953055,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d520ed60a76def6a299a457c51d963,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21a506be09da7e47b592e1f71f4ead3df58c1e7fd95f2067f7d9b65a8b30726,PodSandboxId:96970cc5d1753532077ca2a039263d3a48199f3c7cbaff291e763cdb8416236d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721242585028077131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3f8d7535b02435578d6e4d7b663890,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4762f51a10e39b9d3db6088273ef46e11075e067a5af05563ad02376d2b16032,PodSandboxId:fef616af7f6711e01184f067f29623ec627c5df8e8fa027f944f3677bd393311,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721242254750058813,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5vj5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 368c0d4d-7a32-4133-a588-6994180de799,},Annotations:map[string]string{io.kubernetes.container.hash: 86ab4529,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f88dfe732d94434b50d5843b98c9e6e55b922129065f235e2feb2e6f943e18d,PodSandboxId:9aec21353c8fdafb61defc1d25c4fd601f358acdc8c092c64f37d363c3e48860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721242200881473163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7whgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f117d-b29b-41a4-97f9-259912fd66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 7b4c4aa6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d0256aba83fe2dadbde3f16a6c991063c384879e6a3dab481e5d5d55793d70,PodSandboxId:963cc6ee019020a061ba421b794380b75066e4c252a8800e5a209e610357a87d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721242200825331078,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 3d3b9792-edc4-4e05-9403-e13289faba69,},Annotations:map[string]string{io.kubernetes.container.hash: f9ab1c85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca075474ac25e2ab323c0e66a816afb9f0f55fc6fd98b42a1ffa7f9a14f9fbb,PodSandboxId:825799d719c60274ae5ebae15f1b5e17b332007636d9b8a31281e3c7240ef491,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721242189058609342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2dgx,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: c980f2ac-1e0d-4c68-9f92-168a82001f8a,},Annotations:map[string]string{io.kubernetes.container.hash: 1bb8452,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b14c23bb1ca87f39f25f624aa953ed6eebc4fa2a9a2d74a52c1250d7389eb1,PodSandboxId:c14ba1bb244dbf3198a2f6c5417d22be884b536a3bb6ab2b088baee719375b22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721242185324388231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvt54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 1b3f31e4-5ec7-4731-87b0-a4082e52bfbc,},Annotations:map[string]string{io.kubernetes.container.hash: dcf4cbec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b889bd8bab05d3c179cd226331a5f1ae9394a0fb433fb4aa0b5d2657c2d99d1,PodSandboxId:8e40ffbd5b407315d2ddb139cd90930dcd4f6165c3392b38258bc942160cbedf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721242165900957349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e42142eae007ea8e03819ef1a7ee5b3,}
,Annotations:map[string]string{io.kubernetes.container.hash: cca3a628,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee098e6d7719dc5ca7f9781813c78ba808672dddb1563969fb4856133308685,PodSandboxId:9bf305a4bce6c7eab56b5eeeea14f477514ceb437235368d80c99ee1ebb2fe99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721242165890770509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d520ed60a76def6a299a457c51d963,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730b32413676a97354e3c2dab9aeb0a0e9fc6b21402593c4074e7b18f29b8556,PodSandboxId:17a40604de1e1a5227396cb58bb8fb2ebf142c713e5792e545e4d971e259b715,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721242165802285739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3f8d7535b02435578d6e4d7b663890,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af6609edbfc9adc682e4e031907ae9d13380b5ee79245704dff50cbdecf54b4b,PodSandboxId:5499275fa0e06e77acdf0c6ce1dc3124d935b163fe96e7de82baeb15e863f261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721242165806136825,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6562aa04bb932d82684c593d9f2c44,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f6fc492,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31273a2d-e0b1-4129-bc21-c8cc386621cc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.566648436Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e2b5aaa-540d-4004-84a4-aee7c3263d82 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.566743390Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e2b5aaa-540d-4004-84a4-aee7c3263d82 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.567892527Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=888a306a-3337-4eb1-bebe-57c27418e7a4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.568302656Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242833568279948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=888a306a-3337-4eb1-bebe-57c27418e7a4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.569044141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be747db5-115a-48c7-ab21-ddcdee17f043 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.569106401Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be747db5-115a-48c7-ab21-ddcdee17f043 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.569493810Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58d3a21458f22673c29a7fee8cc849867a0129dbe38797621a789cd4680508ac,PodSandboxId:741aa941fd865775662354ade1d4f7e9ca5641f9808499285da9192c575e903c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721242622559885852,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5vj5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 368c0d4d-7a32-4133-a588-6994180de799,},Annotations:map[string]string{io.kubernetes.container.hash: 86ab4529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bda6f98afceaeb088a1097df5f9dddc483a197ec1f4d27c1de623683df7dceb9,PodSandboxId:a126ee58845bc8232e62e4cf69be1197c3ab8790e98ca4aea760db7de027abb4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721242588963440703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2dgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c980f2ac-1e0d-4c68-9f92-168a82001f8a,},Annotations:map[string]string{io.kubernetes.container.hash: 1bb8452,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a5e8dcee13e757238d3fe01b25ae84be1c35ba4ef19fefd3e231656aefc11,PodSandboxId:d395cff4b789fd35694afbdd894571d0b7ff14a708b64be9edbb206c06176f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721242588931685681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7whgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f117d-b29b-41a4-97f9-259912fd66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 7b4c4aa6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a35a415102d2084fc4473777ffdc1b793a3f2e4ef07b1203aa3cecaf5496a8,PodSandboxId:0dfa9abde7a4540f1d65af1281c1d5540b7056e6b13e70a7d326665c5c3507fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721242588831145039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d3b9792-edc4-4e05-9403-e13289faba69,},Ann
otations:map[string]string{io.kubernetes.container.hash: f9ab1c85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c113fe5012415a8b4bc7042cacd41b98640a5ff67abfb4b142eece598706513,PodSandboxId:7497f051f9f8b7e4e560a8925f35ec0e482f9c631f899c6ea5cbedbcee12a3f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721242588832271826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvt54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3f31e4-5ec7-4731-87b0-a4082e52bfbc,},Annotations:map[string]string{io.kub
ernetes.container.hash: dcf4cbec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd93f6e85081e15c5b84892387d16c77bcef983a8b112108b45884e2d1c5e16f,PodSandboxId:8d89739fe5ed1c16629a5db27aa5882f3378ad1460b454011f3f1fbad088a5cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721242585040761817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6562aa04bb932d82684c593d9f2c44,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f6fc492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca42fc6a22e16e4a2c849c4b399cf1416ac11bff7401f8b5e7d09879b7f95557,PodSandboxId:d949e0d743ff074f5db04eebec89702a01233c8fb268b18a35dce28ef78e8fad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721242585018724547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e42142eae007ea8e03819ef1a7ee5b3,},Annotations:map[string]string{io.kubernetes.container.hash: cca3a628,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87cdb9250f0247ae0247c5ad252b317548321bfbece3d3081339a63799a3ee7f,PodSandboxId:55278faf6c4bd2d55f3b65c286c1e0c1aa29da1df363761a4df0f5629c92e839,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721242585005953055,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d520ed60a76def6a299a457c51d963,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21a506be09da7e47b592e1f71f4ead3df58c1e7fd95f2067f7d9b65a8b30726,PodSandboxId:96970cc5d1753532077ca2a039263d3a48199f3c7cbaff291e763cdb8416236d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721242585028077131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3f8d7535b02435578d6e4d7b663890,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4762f51a10e39b9d3db6088273ef46e11075e067a5af05563ad02376d2b16032,PodSandboxId:fef616af7f6711e01184f067f29623ec627c5df8e8fa027f944f3677bd393311,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721242254750058813,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5vj5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 368c0d4d-7a32-4133-a588-6994180de799,},Annotations:map[string]string{io.kubernetes.container.hash: 86ab4529,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f88dfe732d94434b50d5843b98c9e6e55b922129065f235e2feb2e6f943e18d,PodSandboxId:9aec21353c8fdafb61defc1d25c4fd601f358acdc8c092c64f37d363c3e48860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721242200881473163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7whgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f117d-b29b-41a4-97f9-259912fd66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 7b4c4aa6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d0256aba83fe2dadbde3f16a6c991063c384879e6a3dab481e5d5d55793d70,PodSandboxId:963cc6ee019020a061ba421b794380b75066e4c252a8800e5a209e610357a87d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721242200825331078,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 3d3b9792-edc4-4e05-9403-e13289faba69,},Annotations:map[string]string{io.kubernetes.container.hash: f9ab1c85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca075474ac25e2ab323c0e66a816afb9f0f55fc6fd98b42a1ffa7f9a14f9fbb,PodSandboxId:825799d719c60274ae5ebae15f1b5e17b332007636d9b8a31281e3c7240ef491,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721242189058609342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2dgx,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: c980f2ac-1e0d-4c68-9f92-168a82001f8a,},Annotations:map[string]string{io.kubernetes.container.hash: 1bb8452,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b14c23bb1ca87f39f25f624aa953ed6eebc4fa2a9a2d74a52c1250d7389eb1,PodSandboxId:c14ba1bb244dbf3198a2f6c5417d22be884b536a3bb6ab2b088baee719375b22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721242185324388231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvt54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 1b3f31e4-5ec7-4731-87b0-a4082e52bfbc,},Annotations:map[string]string{io.kubernetes.container.hash: dcf4cbec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b889bd8bab05d3c179cd226331a5f1ae9394a0fb433fb4aa0b5d2657c2d99d1,PodSandboxId:8e40ffbd5b407315d2ddb139cd90930dcd4f6165c3392b38258bc942160cbedf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721242165900957349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e42142eae007ea8e03819ef1a7ee5b3,}
,Annotations:map[string]string{io.kubernetes.container.hash: cca3a628,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee098e6d7719dc5ca7f9781813c78ba808672dddb1563969fb4856133308685,PodSandboxId:9bf305a4bce6c7eab56b5eeeea14f477514ceb437235368d80c99ee1ebb2fe99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721242165890770509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d520ed60a76def6a299a457c51d963,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730b32413676a97354e3c2dab9aeb0a0e9fc6b21402593c4074e7b18f29b8556,PodSandboxId:17a40604de1e1a5227396cb58bb8fb2ebf142c713e5792e545e4d971e259b715,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721242165802285739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3f8d7535b02435578d6e4d7b663890,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af6609edbfc9adc682e4e031907ae9d13380b5ee79245704dff50cbdecf54b4b,PodSandboxId:5499275fa0e06e77acdf0c6ce1dc3124d935b163fe96e7de82baeb15e863f261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721242165806136825,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6562aa04bb932d82684c593d9f2c44,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f6fc492,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be747db5-115a-48c7-ab21-ddcdee17f043 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.610654747Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3bfa97cd-104d-45ec-a4e8-29adf64ec9c4 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.610731780Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3bfa97cd-104d-45ec-a4e8-29adf64ec9c4 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.612234418Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77006056-d92d-4ba1-8e88-ddbaaffc1cfe name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.612900162Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242833612876157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77006056-d92d-4ba1-8e88-ddbaaffc1cfe name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.613561791Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=107f550c-a63c-4931-803c-ab71b8f5ed5e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.613617630Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=107f550c-a63c-4931-803c-ab71b8f5ed5e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.613947504Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58d3a21458f22673c29a7fee8cc849867a0129dbe38797621a789cd4680508ac,PodSandboxId:741aa941fd865775662354ade1d4f7e9ca5641f9808499285da9192c575e903c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721242622559885852,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5vj5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 368c0d4d-7a32-4133-a588-6994180de799,},Annotations:map[string]string{io.kubernetes.container.hash: 86ab4529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bda6f98afceaeb088a1097df5f9dddc483a197ec1f4d27c1de623683df7dceb9,PodSandboxId:a126ee58845bc8232e62e4cf69be1197c3ab8790e98ca4aea760db7de027abb4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721242588963440703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2dgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c980f2ac-1e0d-4c68-9f92-168a82001f8a,},Annotations:map[string]string{io.kubernetes.container.hash: 1bb8452,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a5e8dcee13e757238d3fe01b25ae84be1c35ba4ef19fefd3e231656aefc11,PodSandboxId:d395cff4b789fd35694afbdd894571d0b7ff14a708b64be9edbb206c06176f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721242588931685681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7whgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f117d-b29b-41a4-97f9-259912fd66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 7b4c4aa6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a35a415102d2084fc4473777ffdc1b793a3f2e4ef07b1203aa3cecaf5496a8,PodSandboxId:0dfa9abde7a4540f1d65af1281c1d5540b7056e6b13e70a7d326665c5c3507fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721242588831145039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d3b9792-edc4-4e05-9403-e13289faba69,},Ann
otations:map[string]string{io.kubernetes.container.hash: f9ab1c85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c113fe5012415a8b4bc7042cacd41b98640a5ff67abfb4b142eece598706513,PodSandboxId:7497f051f9f8b7e4e560a8925f35ec0e482f9c631f899c6ea5cbedbcee12a3f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721242588832271826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvt54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3f31e4-5ec7-4731-87b0-a4082e52bfbc,},Annotations:map[string]string{io.kub
ernetes.container.hash: dcf4cbec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd93f6e85081e15c5b84892387d16c77bcef983a8b112108b45884e2d1c5e16f,PodSandboxId:8d89739fe5ed1c16629a5db27aa5882f3378ad1460b454011f3f1fbad088a5cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721242585040761817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6562aa04bb932d82684c593d9f2c44,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f6fc492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca42fc6a22e16e4a2c849c4b399cf1416ac11bff7401f8b5e7d09879b7f95557,PodSandboxId:d949e0d743ff074f5db04eebec89702a01233c8fb268b18a35dce28ef78e8fad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721242585018724547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e42142eae007ea8e03819ef1a7ee5b3,},Annotations:map[string]string{io.kubernetes.container.hash: cca3a628,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87cdb9250f0247ae0247c5ad252b317548321bfbece3d3081339a63799a3ee7f,PodSandboxId:55278faf6c4bd2d55f3b65c286c1e0c1aa29da1df363761a4df0f5629c92e839,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721242585005953055,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d520ed60a76def6a299a457c51d963,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21a506be09da7e47b592e1f71f4ead3df58c1e7fd95f2067f7d9b65a8b30726,PodSandboxId:96970cc5d1753532077ca2a039263d3a48199f3c7cbaff291e763cdb8416236d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721242585028077131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3f8d7535b02435578d6e4d7b663890,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4762f51a10e39b9d3db6088273ef46e11075e067a5af05563ad02376d2b16032,PodSandboxId:fef616af7f6711e01184f067f29623ec627c5df8e8fa027f944f3677bd393311,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721242254750058813,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5vj5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 368c0d4d-7a32-4133-a588-6994180de799,},Annotations:map[string]string{io.kubernetes.container.hash: 86ab4529,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f88dfe732d94434b50d5843b98c9e6e55b922129065f235e2feb2e6f943e18d,PodSandboxId:9aec21353c8fdafb61defc1d25c4fd601f358acdc8c092c64f37d363c3e48860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721242200881473163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7whgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f117d-b29b-41a4-97f9-259912fd66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 7b4c4aa6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d0256aba83fe2dadbde3f16a6c991063c384879e6a3dab481e5d5d55793d70,PodSandboxId:963cc6ee019020a061ba421b794380b75066e4c252a8800e5a209e610357a87d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721242200825331078,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 3d3b9792-edc4-4e05-9403-e13289faba69,},Annotations:map[string]string{io.kubernetes.container.hash: f9ab1c85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca075474ac25e2ab323c0e66a816afb9f0f55fc6fd98b42a1ffa7f9a14f9fbb,PodSandboxId:825799d719c60274ae5ebae15f1b5e17b332007636d9b8a31281e3c7240ef491,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721242189058609342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2dgx,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: c980f2ac-1e0d-4c68-9f92-168a82001f8a,},Annotations:map[string]string{io.kubernetes.container.hash: 1bb8452,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b14c23bb1ca87f39f25f624aa953ed6eebc4fa2a9a2d74a52c1250d7389eb1,PodSandboxId:c14ba1bb244dbf3198a2f6c5417d22be884b536a3bb6ab2b088baee719375b22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721242185324388231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvt54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 1b3f31e4-5ec7-4731-87b0-a4082e52bfbc,},Annotations:map[string]string{io.kubernetes.container.hash: dcf4cbec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b889bd8bab05d3c179cd226331a5f1ae9394a0fb433fb4aa0b5d2657c2d99d1,PodSandboxId:8e40ffbd5b407315d2ddb139cd90930dcd4f6165c3392b38258bc942160cbedf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721242165900957349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e42142eae007ea8e03819ef1a7ee5b3,}
,Annotations:map[string]string{io.kubernetes.container.hash: cca3a628,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee098e6d7719dc5ca7f9781813c78ba808672dddb1563969fb4856133308685,PodSandboxId:9bf305a4bce6c7eab56b5eeeea14f477514ceb437235368d80c99ee1ebb2fe99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721242165890770509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d520ed60a76def6a299a457c51d963,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730b32413676a97354e3c2dab9aeb0a0e9fc6b21402593c4074e7b18f29b8556,PodSandboxId:17a40604de1e1a5227396cb58bb8fb2ebf142c713e5792e545e4d971e259b715,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721242165802285739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3f8d7535b02435578d6e4d7b663890,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af6609edbfc9adc682e4e031907ae9d13380b5ee79245704dff50cbdecf54b4b,PodSandboxId:5499275fa0e06e77acdf0c6ce1dc3124d935b163fe96e7de82baeb15e863f261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721242165806136825,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6562aa04bb932d82684c593d9f2c44,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f6fc492,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=107f550c-a63c-4931-803c-ab71b8f5ed5e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.653951258Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d46dd48a-ebf7-4c13-afb2-ee51e978ee25 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.654027127Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d46dd48a-ebf7-4c13-afb2-ee51e978ee25 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.654954257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1500b33-8887-4ebf-ba8a-d8672e59c666 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.655410964Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242833655321277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1500b33-8887-4ebf-ba8a-d8672e59c666 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.655821405Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb9c411c-d910-4e3c-be38-318c53b2e7db name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.655873462Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb9c411c-d910-4e3c-be38-318c53b2e7db name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:33 multinode-717026 crio[2878]: time="2024-07-17 19:00:33.656197141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:58d3a21458f22673c29a7fee8cc849867a0129dbe38797621a789cd4680508ac,PodSandboxId:741aa941fd865775662354ade1d4f7e9ca5641f9808499285da9192c575e903c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721242622559885852,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5vj5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 368c0d4d-7a32-4133-a588-6994180de799,},Annotations:map[string]string{io.kubernetes.container.hash: 86ab4529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bda6f98afceaeb088a1097df5f9dddc483a197ec1f4d27c1de623683df7dceb9,PodSandboxId:a126ee58845bc8232e62e4cf69be1197c3ab8790e98ca4aea760db7de027abb4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721242588963440703,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2dgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c980f2ac-1e0d-4c68-9f92-168a82001f8a,},Annotations:map[string]string{io.kubernetes.container.hash: 1bb8452,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1a5e8dcee13e757238d3fe01b25ae84be1c35ba4ef19fefd3e231656aefc11,PodSandboxId:d395cff4b789fd35694afbdd894571d0b7ff14a708b64be9edbb206c06176f6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721242588931685681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7whgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f117d-b29b-41a4-97f9-259912fd66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 7b4c4aa6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9a35a415102d2084fc4473777ffdc1b793a3f2e4ef07b1203aa3cecaf5496a8,PodSandboxId:0dfa9abde7a4540f1d65af1281c1d5540b7056e6b13e70a7d326665c5c3507fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721242588831145039,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d3b9792-edc4-4e05-9403-e13289faba69,},Ann
otations:map[string]string{io.kubernetes.container.hash: f9ab1c85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c113fe5012415a8b4bc7042cacd41b98640a5ff67abfb4b142eece598706513,PodSandboxId:7497f051f9f8b7e4e560a8925f35ec0e482f9c631f899c6ea5cbedbcee12a3f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721242588832271826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvt54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b3f31e4-5ec7-4731-87b0-a4082e52bfbc,},Annotations:map[string]string{io.kub
ernetes.container.hash: dcf4cbec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd93f6e85081e15c5b84892387d16c77bcef983a8b112108b45884e2d1c5e16f,PodSandboxId:8d89739fe5ed1c16629a5db27aa5882f3378ad1460b454011f3f1fbad088a5cd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721242585040761817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6562aa04bb932d82684c593d9f2c44,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f6fc492,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca42fc6a22e16e4a2c849c4b399cf1416ac11bff7401f8b5e7d09879b7f95557,PodSandboxId:d949e0d743ff074f5db04eebec89702a01233c8fb268b18a35dce28ef78e8fad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721242585018724547,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e42142eae007ea8e03819ef1a7ee5b3,},Annotations:map[string]string{io.kubernetes.container.hash: cca3a628,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87cdb9250f0247ae0247c5ad252b317548321bfbece3d3081339a63799a3ee7f,PodSandboxId:55278faf6c4bd2d55f3b65c286c1e0c1aa29da1df363761a4df0f5629c92e839,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721242585005953055,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d520ed60a76def6a299a457c51d963,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e21a506be09da7e47b592e1f71f4ead3df58c1e7fd95f2067f7d9b65a8b30726,PodSandboxId:96970cc5d1753532077ca2a039263d3a48199f3c7cbaff291e763cdb8416236d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721242585028077131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3f8d7535b02435578d6e4d7b663890,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4762f51a10e39b9d3db6088273ef46e11075e067a5af05563ad02376d2b16032,PodSandboxId:fef616af7f6711e01184f067f29623ec627c5df8e8fa027f944f3677bd393311,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721242254750058813,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5vj5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 368c0d4d-7a32-4133-a588-6994180de799,},Annotations:map[string]string{io.kubernetes.container.hash: 86ab4529,io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f88dfe732d94434b50d5843b98c9e6e55b922129065f235e2feb2e6f943e18d,PodSandboxId:9aec21353c8fdafb61defc1d25c4fd601f358acdc8c092c64f37d363c3e48860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721242200881473163,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7whgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28f117d-b29b-41a4-97f9-259912fd66e3,},Annotations:map[string]string{io.kubernetes.container.hash: 7b4c4aa6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60d0256aba83fe2dadbde3f16a6c991063c384879e6a3dab481e5d5d55793d70,PodSandboxId:963cc6ee019020a061ba421b794380b75066e4c252a8800e5a209e610357a87d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721242200825331078,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 3d3b9792-edc4-4e05-9403-e13289faba69,},Annotations:map[string]string{io.kubernetes.container.hash: f9ab1c85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca075474ac25e2ab323c0e66a816afb9f0f55fc6fd98b42a1ffa7f9a14f9fbb,PodSandboxId:825799d719c60274ae5ebae15f1b5e17b332007636d9b8a31281e3c7240ef491,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721242189058609342,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-d2dgx,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: c980f2ac-1e0d-4c68-9f92-168a82001f8a,},Annotations:map[string]string{io.kubernetes.container.hash: 1bb8452,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34b14c23bb1ca87f39f25f624aa953ed6eebc4fa2a9a2d74a52c1250d7389eb1,PodSandboxId:c14ba1bb244dbf3198a2f6c5417d22be884b536a3bb6ab2b088baee719375b22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721242185324388231,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvt54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 1b3f31e4-5ec7-4731-87b0-a4082e52bfbc,},Annotations:map[string]string{io.kubernetes.container.hash: dcf4cbec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b889bd8bab05d3c179cd226331a5f1ae9394a0fb433fb4aa0b5d2657c2d99d1,PodSandboxId:8e40ffbd5b407315d2ddb139cd90930dcd4f6165c3392b38258bc942160cbedf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721242165900957349,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e42142eae007ea8e03819ef1a7ee5b3,}
,Annotations:map[string]string{io.kubernetes.container.hash: cca3a628,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee098e6d7719dc5ca7f9781813c78ba808672dddb1563969fb4856133308685,PodSandboxId:9bf305a4bce6c7eab56b5eeeea14f477514ceb437235368d80c99ee1ebb2fe99,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721242165890770509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18d520ed60a76def6a299a457c51d963,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:730b32413676a97354e3c2dab9aeb0a0e9fc6b21402593c4074e7b18f29b8556,PodSandboxId:17a40604de1e1a5227396cb58bb8fb2ebf142c713e5792e545e4d971e259b715,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721242165802285739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f3f8d7535b02435578d6e4d7b663890,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af6609edbfc9adc682e4e031907ae9d13380b5ee79245704dff50cbdecf54b4b,PodSandboxId:5499275fa0e06e77acdf0c6ce1dc3124d935b163fe96e7de82baeb15e863f261,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721242165806136825,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-717026,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d6562aa04bb932d82684c593d9f2c44,},Annotations:map[st
ring]string{io.kubernetes.container.hash: f6fc492,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb9c411c-d910-4e3c-be38-318c53b2e7db name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	58d3a21458f22       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   741aa941fd865       busybox-fc5497c4f-5vj5m
	bda6f98afceae       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      4 minutes ago       Running             kindnet-cni               1                   a126ee58845bc       kindnet-d2dgx
	1d1a5e8dcee13       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   d395cff4b789f       coredns-7db6d8ff4d-7whgn
	3c113fe501241       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      4 minutes ago       Running             kube-proxy                1                   7497f051f9f8b       kube-proxy-bvt54
	d9a35a415102d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   0dfa9abde7a45       storage-provisioner
	cd93f6e85081e       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      4 minutes ago       Running             kube-apiserver            1                   8d89739fe5ed1       kube-apiserver-multinode-717026
	e21a506be09da       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      4 minutes ago       Running             kube-controller-manager   1                   96970cc5d1753       kube-controller-manager-multinode-717026
	ca42fc6a22e16       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   d949e0d743ff0       etcd-multinode-717026
	87cdb9250f024       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      4 minutes ago       Running             kube-scheduler            1                   55278faf6c4bd       kube-scheduler-multinode-717026
	4762f51a10e39       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   fef616af7f671       busybox-fc5497c4f-5vj5m
	6f88dfe732d94       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   9aec21353c8fd       coredns-7db6d8ff4d-7whgn
	60d0256aba83f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   963cc6ee01902       storage-provisioner
	9ca075474ac25       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    10 minutes ago      Exited              kindnet-cni               0                   825799d719c60       kindnet-d2dgx
	34b14c23bb1ca       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      10 minutes ago      Exited              kube-proxy                0                   c14ba1bb244db       kube-proxy-bvt54
	2b889bd8bab05       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   8e40ffbd5b407       etcd-multinode-717026
	bee098e6d7719       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      11 minutes ago      Exited              kube-scheduler            0                   9bf305a4bce6c       kube-scheduler-multinode-717026
	af6609edbfc9a       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      11 minutes ago      Exited              kube-apiserver            0                   5499275fa0e06       kube-apiserver-multinode-717026
	730b32413676a       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      11 minutes ago      Exited              kube-controller-manager   0                   17a40604de1e1       kube-controller-manager-multinode-717026
	
	
	==> coredns [1d1a5e8dcee13e757238d3fe01b25ae84be1c35ba4ef19fefd3e231656aefc11] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37040 - 24592 "HINFO IN 754692025035790143.4204466252186269178. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010521181s
	
	
	==> coredns [6f88dfe732d94434b50d5843b98c9e6e55b922129065f235e2feb2e6f943e18d] <==
	[INFO] 10.244.1.2:37014 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001790441s
	[INFO] 10.244.1.2:41768 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111117s
	[INFO] 10.244.1.2:35966 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090846s
	[INFO] 10.244.1.2:49503 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00122121s
	[INFO] 10.244.1.2:54852 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077842s
	[INFO] 10.244.1.2:48975 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085951s
	[INFO] 10.244.1.2:57497 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011806s
	[INFO] 10.244.0.3:46585 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076374s
	[INFO] 10.244.0.3:52646 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090822s
	[INFO] 10.244.0.3:51316 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00003596s
	[INFO] 10.244.0.3:33089 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00002643s
	[INFO] 10.244.1.2:54084 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141064s
	[INFO] 10.244.1.2:60043 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113407s
	[INFO] 10.244.1.2:38353 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010914s
	[INFO] 10.244.1.2:42908 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091006s
	[INFO] 10.244.0.3:42411 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100803s
	[INFO] 10.244.0.3:51807 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000165229s
	[INFO] 10.244.0.3:53748 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108863s
	[INFO] 10.244.0.3:60213 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00018404s
	[INFO] 10.244.1.2:57750 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139368s
	[INFO] 10.244.1.2:53333 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108039s
	[INFO] 10.244.1.2:40105 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009985s
	[INFO] 10.244.1.2:58791 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101577s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-717026
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-717026
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=multinode-717026
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T18_49_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:49:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-717026
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 19:00:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:56:28 +0000   Wed, 17 Jul 2024 18:49:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:56:28 +0000   Wed, 17 Jul 2024 18:49:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:56:28 +0000   Wed, 17 Jul 2024 18:49:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:56:28 +0000   Wed, 17 Jul 2024 18:50:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.122
	  Hostname:    multinode-717026
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 42de24f5e4a34541883387786145f33b
	  System UUID:                42de24f5-e4a3-4541-8833-87786145f33b
	  Boot ID:                    ad568060-4c01-47aa-bd9f-f2b05f22939a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5vj5m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  kube-system                 coredns-7db6d8ff4d-7whgn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-717026                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-d2dgx                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-717026             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-717026    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-bvt54                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-717026             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node multinode-717026 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node multinode-717026 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)    kubelet          Node multinode-717026 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                  kubelet          Node multinode-717026 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m                  kubelet          Node multinode-717026 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                  kubelet          Node multinode-717026 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-717026 event: Registered Node multinode-717026 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-717026 status is now: NodeReady
	  Normal  Starting                 4m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node multinode-717026 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node multinode-717026 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node multinode-717026 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m52s                node-controller  Node multinode-717026 event: Registered Node multinode-717026 in Controller
	
	
	Name:               multinode-717026-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-717026-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=multinode-717026
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_57_09_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:57:09 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-717026-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:58:10 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 18:57:39 +0000   Wed, 17 Jul 2024 18:58:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 18:57:39 +0000   Wed, 17 Jul 2024 18:58:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 18:57:39 +0000   Wed, 17 Jul 2024 18:58:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 18:57:39 +0000   Wed, 17 Jul 2024 18:58:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    multinode-717026-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 df879ceb92f14b2eab1427dfb15b34d5
	  System UUID:                df879ceb-92f1-4b2e-ab14-27dfb15b34d5
	  Boot ID:                    605bdb59-fa53-4b1a-922e-f86525da8e19
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-p6tvs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kindnet-tkhlb              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-dkdzm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-717026-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-717026-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-717026-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m44s                  kubelet          Node multinode-717026-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m24s (x2 over 3m24s)  kubelet          Node multinode-717026-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m24s (x2 over 3m24s)  kubelet          Node multinode-717026-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m24s (x2 over 3m24s)  kubelet          Node multinode-717026-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-717026-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-717026-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.060637] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.173351] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.135001] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.280757] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +4.122147] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.195114] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.061046] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.991541] systemd-fstab-generator[1284]: Ignoring "noauto" option for root device
	[  +0.074355] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.865726] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.677687] systemd-fstab-generator[1473]: Ignoring "noauto" option for root device
	[  +5.550507] kauditd_printk_skb: 56 callbacks suppressed
	[Jul17 18:50] kauditd_printk_skb: 14 callbacks suppressed
	[Jul17 18:56] systemd-fstab-generator[2795]: Ignoring "noauto" option for root device
	[  +0.140237] systemd-fstab-generator[2807]: Ignoring "noauto" option for root device
	[  +0.171312] systemd-fstab-generator[2821]: Ignoring "noauto" option for root device
	[  +0.142008] systemd-fstab-generator[2833]: Ignoring "noauto" option for root device
	[  +0.298480] systemd-fstab-generator[2861]: Ignoring "noauto" option for root device
	[  +7.654262] systemd-fstab-generator[2967]: Ignoring "noauto" option for root device
	[  +0.080043] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.079550] systemd-fstab-generator[3091]: Ignoring "noauto" option for root device
	[  +4.688060] kauditd_printk_skb: 74 callbacks suppressed
	[ +12.710706] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.361242] systemd-fstab-generator[3929]: Ignoring "noauto" option for root device
	[Jul17 18:57] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2b889bd8bab05d3c179cd226331a5f1ae9394a0fb433fb4aa0b5d2657c2d99d1] <==
	{"level":"warn","ts":"2024-07-17T18:50:37.651124Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"328.477199ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T18:50:37.651851Z","caller":"traceutil/trace.go:171","msg":"trace[1005186273] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:534; }","duration":"329.187669ms","start":"2024-07-17T18:50:37.322595Z","end":"2024-07-17T18:50:37.651783Z","steps":["trace[1005186273] 'range keys from in-memory index tree'  (duration: 328.434131ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:50:37.651925Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T18:50:37.322581Z","time spent":"329.317176ms","remote":"127.0.0.1:43524","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-07-17T18:51:28.242636Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.180996ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10197434619291488937 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-717026-m03.17e314bdcb646beb\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-717026-m03.17e314bdcb646beb\" value_size:646 lease:974062582436712653 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T18:51:28.242943Z","caller":"traceutil/trace.go:171","msg":"trace[942711802] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"240.238382ms","start":"2024-07-17T18:51:28.002679Z","end":"2024-07-17T18:51:28.242917Z","steps":["trace[942711802] 'process raft request'  (duration: 79.237469ms)","trace[942711802] 'compare'  (duration: 160.086267ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T18:51:28.243139Z","caller":"traceutil/trace.go:171","msg":"trace[43367061] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"191.548772ms","start":"2024-07-17T18:51:28.05158Z","end":"2024-07-17T18:51:28.243128Z","steps":["trace[43367061] 'process raft request'  (duration: 191.272614ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:51:29.873951Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.020249ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T18:51:29.874073Z","caller":"traceutil/trace.go:171","msg":"trace[124694286] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; response_count:0; response_revision:657; }","duration":"188.177953ms","start":"2024-07-17T18:51:29.685881Z","end":"2024-07-17T18:51:29.874059Z","steps":["trace[124694286] 'count revisions from in-memory index tree'  (duration: 187.928303ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:51:30.175744Z","caller":"traceutil/trace.go:171","msg":"trace[1492403570] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"226.233377ms","start":"2024-07-17T18:51:29.949497Z","end":"2024-07-17T18:51:30.17573Z","steps":["trace[1492403570] 'process raft request'  (duration: 225.303587ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:51:30.176085Z","caller":"traceutil/trace.go:171","msg":"trace[2013162657] transaction","detail":"{read_only:false; response_revision:659; number_of_response:1; }","duration":"105.417756ms","start":"2024-07-17T18:51:30.070653Z","end":"2024-07-17T18:51:30.176071Z","steps":["trace[2013162657] 'process raft request'  (duration: 104.851644ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:51:30.176214Z","caller":"traceutil/trace.go:171","msg":"trace[994084802] linearizableReadLoop","detail":"{readStateIndex:705; appliedIndex:703; }","duration":"106.945589ms","start":"2024-07-17T18:51:30.069262Z","end":"2024-07-17T18:51:30.176208Z","steps":["trace[994084802] 'read index received'  (duration: 289.857µs)","trace[994084802] 'applied index is now lower than readState.Index'  (duration: 106.654962ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T18:51:30.176399Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.123758ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2024-07-17T18:51:30.176438Z","caller":"traceutil/trace.go:171","msg":"trace[146943148] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:659; }","duration":"107.200457ms","start":"2024-07-17T18:51:30.069231Z","end":"2024-07-17T18:51:30.176431Z","steps":["trace[146943148] 'agreement among raft nodes before linearized reading'  (duration: 107.058806ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:51:30.176552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.233838ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2024-07-17T18:51:30.176585Z","caller":"traceutil/trace.go:171","msg":"trace[1717280603] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:659; }","duration":"107.27985ms","start":"2024-07-17T18:51:30.0693Z","end":"2024-07-17T18:51:30.17658Z","steps":["trace[1717280603] 'agreement among raft nodes before linearized reading'  (duration: 107.227946ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:54:42.183531Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-17T18:54:42.183649Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-717026","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.122:2380"],"advertise-client-urls":["https://192.168.39.122:2379"]}
	{"level":"warn","ts":"2024-07-17T18:54:42.183739Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T18:54:42.183824Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T18:54:42.249901Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.122:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T18:54:42.250012Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.122:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T18:54:42.251601Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"227d76f9723f8d84","current-leader-member-id":"227d76f9723f8d84"}
	{"level":"info","ts":"2024-07-17T18:54:42.254262Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.122:2380"}
	{"level":"info","ts":"2024-07-17T18:54:42.254444Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.122:2380"}
	{"level":"info","ts":"2024-07-17T18:54:42.254471Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-717026","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.122:2380"],"advertise-client-urls":["https://192.168.39.122:2379"]}
	
	
	==> etcd [ca42fc6a22e16e4a2c849c4b399cf1416ac11bff7401f8b5e7d09879b7f95557] <==
	{"level":"info","ts":"2024-07-17T18:56:25.446926Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T18:56:25.446937Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T18:56:25.446884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"227d76f9723f8d84 switched to configuration voters=(2485273383114083716)"}
	{"level":"info","ts":"2024-07-17T18:56:25.447048Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4faa3c7cd4b30445","local-member-id":"227d76f9723f8d84","added-peer-id":"227d76f9723f8d84","added-peer-peer-urls":["https://192.168.39.122:2380"]}
	{"level":"info","ts":"2024-07-17T18:56:25.447172Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4faa3c7cd4b30445","local-member-id":"227d76f9723f8d84","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:56:25.447233Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:56:25.466695Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T18:56:25.466984Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"227d76f9723f8d84","initial-advertise-peer-urls":["https://192.168.39.122:2380"],"listen-peer-urls":["https://192.168.39.122:2380"],"advertise-client-urls":["https://192.168.39.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T18:56:25.467031Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T18:56:25.467122Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.122:2380"}
	{"level":"info","ts":"2024-07-17T18:56:25.467145Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.122:2380"}
	{"level":"info","ts":"2024-07-17T18:56:26.371405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"227d76f9723f8d84 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T18:56:26.371458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"227d76f9723f8d84 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T18:56:26.371497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"227d76f9723f8d84 received MsgPreVoteResp from 227d76f9723f8d84 at term 2"}
	{"level":"info","ts":"2024-07-17T18:56:26.371514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"227d76f9723f8d84 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T18:56:26.371521Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"227d76f9723f8d84 received MsgVoteResp from 227d76f9723f8d84 at term 3"}
	{"level":"info","ts":"2024-07-17T18:56:26.371529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"227d76f9723f8d84 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T18:56:26.371536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 227d76f9723f8d84 elected leader 227d76f9723f8d84 at term 3"}
	{"level":"info","ts":"2024-07-17T18:56:26.377672Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"227d76f9723f8d84","local-member-attributes":"{Name:multinode-717026 ClientURLs:[https://192.168.39.122:2379]}","request-path":"/0/members/227d76f9723f8d84/attributes","cluster-id":"4faa3c7cd4b30445","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T18:56:26.377776Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:56:26.392228Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.122:2379"}
	{"level":"info","ts":"2024-07-17T18:56:26.395229Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:56:26.39545Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T18:56:26.400601Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T18:56:26.401898Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:00:34 up 11 min,  0 users,  load average: 0.14, 0.16, 0.10
	Linux multinode-717026 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9ca075474ac25e2ab323c0e66a816afb9f0f55fc6fd98b42a1ffa7f9a14f9fbb] <==
	I0717 18:53:59.911806       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:54:09.912424       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:54:09.912456       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:54:09.912627       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:54:09.912658       1 main.go:326] Node multinode-717026-m03 has CIDR [10.244.3.0/24] 
	I0717 18:54:09.912723       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 18:54:09.912754       1 main.go:303] handling current node
	I0717 18:54:19.906923       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:54:19.906977       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:54:19.907169       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:54:19.907228       1 main.go:326] Node multinode-717026-m03 has CIDR [10.244.3.0/24] 
	I0717 18:54:19.907289       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 18:54:19.907408       1 main.go:303] handling current node
	I0717 18:54:29.915989       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 18:54:29.916102       1 main.go:303] handling current node
	I0717 18:54:29.916129       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:54:29.916157       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:54:29.916304       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:54:29.916325       1 main.go:326] Node multinode-717026-m03 has CIDR [10.244.3.0/24] 
	I0717 18:54:39.911398       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 18:54:39.911487       1 main.go:303] handling current node
	I0717 18:54:39.911511       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:54:39.911518       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:54:39.911709       1 main.go:299] Handling node with IPs: map[192.168.39.198:{}]
	I0717 18:54:39.911735       1 main.go:326] Node multinode-717026-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [bda6f98afceaeb088a1097df5f9dddc483a197ec1f4d27c1de623683df7dceb9] <==
	I0717 18:59:29.907760       1 main.go:303] handling current node
	I0717 18:59:39.911691       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 18:59:39.911789       1 main.go:303] handling current node
	I0717 18:59:39.911826       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:59:39.911831       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:59:49.906801       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 18:59:49.907768       1 main.go:303] handling current node
	I0717 18:59:49.907786       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:59:49.907793       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:59:59.906782       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:59:59.906859       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 18:59:59.907074       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 18:59:59.907129       1 main.go:303] handling current node
	I0717 19:00:09.913467       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 19:00:09.913593       1 main.go:303] handling current node
	I0717 19:00:09.913623       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 19:00:09.913679       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 19:00:19.916478       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 19:00:19.916520       1 main.go:303] handling current node
	I0717 19:00:19.916535       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 19:00:19.916540       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	I0717 19:00:29.906962       1 main.go:299] Handling node with IPs: map[192.168.39.122:{}]
	I0717 19:00:29.906988       1 main.go:303] handling current node
	I0717 19:00:29.907012       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 19:00:29.907017       1 main.go:326] Node multinode-717026-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [af6609edbfc9adc682e4e031907ae9d13380b5ee79245704dff50cbdecf54b4b] <==
	I0717 18:49:44.426901       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 18:49:44.519971       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0717 18:50:56.530908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38536: use of closed network connection
	E0717 18:50:56.715085       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38558: use of closed network connection
	E0717 18:50:56.897013       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38584: use of closed network connection
	E0717 18:50:57.059144       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38604: use of closed network connection
	E0717 18:50:57.225832       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38614: use of closed network connection
	E0717 18:50:57.502925       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38660: use of closed network connection
	E0717 18:50:57.683832       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38672: use of closed network connection
	E0717 18:50:57.859957       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38682: use of closed network connection
	E0717 18:50:58.026017       1 conn.go:339] Error on socket receive: read tcp 192.168.39.122:8443->192.168.39.1:38702: use of closed network connection
	I0717 18:54:42.180091       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0717 18:54:42.210330       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.211556       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.211639       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.211692       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.212044       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.212513       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.212578       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.212628       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.212684       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.213483       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:54:42.213564       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0717 18:54:42.212126       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0717 18:54:42.213847       1 controller.go:84] Shutting down OpenAPI AggregationController
	
	
	==> kube-apiserver [cd93f6e85081e15c5b84892387d16c77bcef983a8b112108b45884e2d1c5e16f] <==
	I0717 18:56:27.896760       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 18:56:27.897288       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 18:56:27.897447       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 18:56:27.897733       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 18:56:27.897808       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 18:56:27.905552       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 18:56:27.907002       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0717 18:56:27.912590       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0717 18:56:27.923672       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 18:56:27.923797       1 aggregator.go:165] initial CRD sync complete...
	I0717 18:56:27.923840       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 18:56:27.923863       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 18:56:27.923886       1 cache.go:39] Caches are synced for autoregister controller
	I0717 18:56:27.931316       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 18:56:27.932978       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 18:56:27.933037       1 policy_source.go:224] refreshing policies
	I0717 18:56:27.988287       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 18:56:28.823735       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 18:56:30.226986       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 18:56:30.337081       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 18:56:30.355149       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 18:56:30.424191       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 18:56:30.434669       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 18:56:41.137122       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 18:56:41.427211       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [730b32413676a97354e3c2dab9aeb0a0e9fc6b21402593c4074e7b18f29b8556] <==
	I0717 18:50:29.078587       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-717026-m02\" does not exist"
	I0717 18:50:29.121570       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-717026-m02" podCIDRs=["10.244.1.0/24"]
	I0717 18:50:33.874249       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-717026-m02"
	I0717 18:50:49.311645       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:50:51.704696       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.102954ms"
	I0717 18:50:51.721801       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.88667ms"
	I0717 18:50:51.743086       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.233869ms"
	I0717 18:50:51.743289       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.66µs"
	I0717 18:50:55.218055       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.175587ms"
	I0717 18:50:55.218400       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="168.329µs"
	I0717 18:50:55.929261       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.015433ms"
	I0717 18:50:55.929823       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.725µs"
	I0717 18:51:28.245724       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-717026-m03\" does not exist"
	I0717 18:51:28.249482       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:51:28.285575       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-717026-m03" podCIDRs=["10.244.2.0/24"]
	I0717 18:51:28.898969       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-717026-m03"
	I0717 18:51:47.613833       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:52:16.211089       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:52:17.280645       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:52:17.280713       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-717026-m03\" does not exist"
	I0717 18:52:17.299581       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-717026-m03" podCIDRs=["10.244.3.0/24"]
	I0717 18:52:36.522034       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:53:13.952572       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m03"
	I0717 18:53:14.011951       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.343187ms"
	I0717 18:53:14.012158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.886µs"
	
	
	==> kube-controller-manager [e21a506be09da7e47b592e1f71f4ead3df58c1e7fd95f2067f7d9b65a8b30726] <==
	I0717 18:57:09.357698       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-717026-m02" podCIDRs=["10.244.1.0/24"]
	I0717 18:57:11.120936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.501µs"
	I0717 18:57:11.235808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.174µs"
	I0717 18:57:11.275590       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.819µs"
	I0717 18:57:11.289780       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.239µs"
	I0717 18:57:11.308806       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.975µs"
	I0717 18:57:11.317568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.902µs"
	I0717 18:57:11.319721       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.489µs"
	I0717 18:57:28.612573       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:57:28.636280       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.986µs"
	I0717 18:57:28.649657       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.514µs"
	I0717 18:57:32.825567       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.370955ms"
	I0717 18:57:32.826029       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.943µs"
	I0717 18:57:46.819924       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:57:47.899618       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:57:47.899764       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-717026-m03\" does not exist"
	I0717 18:57:47.909144       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-717026-m03" podCIDRs=["10.244.2.0/24"]
	I0717 18:58:07.125709       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:58:12.480848       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-717026-m02"
	I0717 18:58:51.287282       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.685924ms"
	I0717 18:58:51.287779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.419µs"
	I0717 18:59:01.101261       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-7dmgp"
	I0717 18:59:01.131536       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-7dmgp"
	I0717 18:59:01.131597       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-j4x2f"
	I0717 18:59:01.154875       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-j4x2f"
	
	
	==> kube-proxy [34b14c23bb1ca87f39f25f624aa953ed6eebc4fa2a9a2d74a52c1250d7389eb1] <==
	I0717 18:49:45.466768       1 server_linux.go:69] "Using iptables proxy"
	I0717 18:49:45.476725       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.122"]
	I0717 18:49:45.521243       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:49:45.521290       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:49:45.521306       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:49:45.528070       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:49:45.528825       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:49:45.528853       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:49:45.529917       1 config.go:192] "Starting service config controller"
	I0717 18:49:45.529952       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:49:45.529997       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:49:45.530019       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:49:45.535197       1 config.go:319] "Starting node config controller"
	I0717 18:49:45.535224       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:49:45.630583       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 18:49:45.630610       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:49:45.635695       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [3c113fe5012415a8b4bc7042cacd41b98640a5ff67abfb4b142eece598706513] <==
	I0717 18:56:29.210327       1 server_linux.go:69] "Using iptables proxy"
	I0717 18:56:29.236799       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.122"]
	I0717 18:56:29.295518       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:56:29.295609       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:56:29.295642       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:56:29.307078       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:56:29.307725       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:56:29.309512       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:56:29.312173       1 config.go:192] "Starting service config controller"
	I0717 18:56:29.318130       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:56:29.315984       1 config.go:319] "Starting node config controller"
	I0717 18:56:29.319565       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:56:29.315464       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:56:29.321424       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:56:29.419963       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:56:29.421452       1 shared_informer.go:320] Caches are synced for node config
	I0717 18:56:29.423107       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [87cdb9250f0247ae0247c5ad252b317548321bfbece3d3081339a63799a3ee7f] <==
	I0717 18:56:26.320095       1 serving.go:380] Generated self-signed cert in-memory
	W0717 18:56:27.880408       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 18:56:27.880508       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 18:56:27.880548       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 18:56:27.880582       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 18:56:27.904518       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 18:56:27.904607       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:56:27.910061       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 18:56:27.910100       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 18:56:27.910837       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 18:56:27.910907       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 18:56:28.010448       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [bee098e6d7719dc5ca7f9781813c78ba808672dddb1563969fb4856133308685] <==
	E0717 18:49:28.332839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:49:28.331545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 18:49:28.333115       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:49:28.333202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 18:49:28.333404       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:49:28.333491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 18:49:28.335402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:49:28.335437       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:49:29.149778       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:49:29.149834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:49:29.354847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:49:29.354930       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:49:29.367845       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:49:29.367900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 18:49:29.390193       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:49:29.390270       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 18:49:29.408518       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:49:29.408565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:49:29.424130       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:49:29.424234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0717 18:49:32.319579       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 18:54:42.186997       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0717 18:54:42.187160       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0717 18:54:42.190414       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0717 18:54:42.190474       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.278906    3098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b3f31e4-5ec7-4731-87b0-a4082e52bfbc-xtables-lock\") pod \"kube-proxy-bvt54\" (UID: \"1b3f31e4-5ec7-4731-87b0-a4082e52bfbc\") " pod="kube-system/kube-proxy-bvt54"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.278989    3098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c980f2ac-1e0d-4c68-9f92-168a82001f8a-cni-cfg\") pod \"kindnet-d2dgx\" (UID: \"c980f2ac-1e0d-4c68-9f92-168a82001f8a\") " pod="kube-system/kindnet-d2dgx"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.279025    3098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c980f2ac-1e0d-4c68-9f92-168a82001f8a-xtables-lock\") pod \"kindnet-d2dgx\" (UID: \"c980f2ac-1e0d-4c68-9f92-168a82001f8a\") " pod="kube-system/kindnet-d2dgx"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.279172    3098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3d3b9792-edc4-4e05-9403-e13289faba69-tmp\") pod \"storage-provisioner\" (UID: \"3d3b9792-edc4-4e05-9403-e13289faba69\") " pod="kube-system/storage-provisioner"
	Jul 17 18:56:28 multinode-717026 kubelet[3098]: I0717 18:56:28.279217    3098 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b3f31e4-5ec7-4731-87b0-a4082e52bfbc-lib-modules\") pod \"kube-proxy-bvt54\" (UID: \"1b3f31e4-5ec7-4731-87b0-a4082e52bfbc\") " pod="kube-system/kube-proxy-bvt54"
	Jul 17 18:57:24 multinode-717026 kubelet[3098]: E0717 18:57:24.386319    3098 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:57:24 multinode-717026 kubelet[3098]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:57:24 multinode-717026 kubelet[3098]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:57:24 multinode-717026 kubelet[3098]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:57:24 multinode-717026 kubelet[3098]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:58:24 multinode-717026 kubelet[3098]: E0717 18:58:24.386949    3098 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:58:24 multinode-717026 kubelet[3098]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:58:24 multinode-717026 kubelet[3098]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:58:24 multinode-717026 kubelet[3098]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:58:24 multinode-717026 kubelet[3098]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:59:24 multinode-717026 kubelet[3098]: E0717 18:59:24.390511    3098 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:59:24 multinode-717026 kubelet[3098]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:59:24 multinode-717026 kubelet[3098]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:59:24 multinode-717026 kubelet[3098]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:59:24 multinode-717026 kubelet[3098]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:00:24 multinode-717026 kubelet[3098]: E0717 19:00:24.385876    3098 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:00:24 multinode-717026 kubelet[3098]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:00:24 multinode-717026 kubelet[3098]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:00:24 multinode-717026 kubelet[3098]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:00:24 multinode-717026 kubelet[3098]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:00:33.230023  431575 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19282-392903/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
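Note: the stderr block above ends with "failed to read file .../lastStart.txt: bufio.Scanner: token too long", which is Go's bufio.Scanner hitting its default 64 KiB per-token limit on a very long line in lastStart.txt. Below is a minimal sketch of how a log reader can raise that limit; the local file name and the 1 MiB cap are illustrative assumptions, not minikube's actual implementation.

// Sketch only: read a log file whose lines can exceed bufio's default
// 64 KiB token limit (the "bufio.Scanner: token too long" error above).
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // stand-in for the lastStart.txt path in the error
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Grow the scanner's buffer so very long lines no longer trip ErrTooLong.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err) // this is where "token too long" would otherwise surface
	}
}

Scanner.Buffer must be called before the first Scan; an alternative with no fixed token limit is bufio.Reader.ReadString.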
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-717026 -n multinode-717026
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-717026 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.23s)
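Note: the kubelet log above repeats "Could not set up iptables canary ... can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)", which means the guest kernel has no IPv6 nat table loaded; on most Linux guests that table is provided by the ip6table_nat module. The following is a diagnostic sketch over minikube ssh, not part of the test suite; it assumes the profile name multinode-717026 from this run, the minikube binary path used above, and that the guest kernel actually ships the module.

// Sketch only: probe the guest's ip6tables nat table and try to load ip6table_nat.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const profile = "multinode-717026" // profile name taken from the run above
	// Check whether the IPv6 nat table is usable inside the guest.
	check := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "sudo ip6tables -t nat -L -n")
	if out, err := check.CombinedOutput(); err != nil {
		fmt.Printf("ip6tables nat table not available: %v\n%s", err, out)
		// Try loading the module that provides it (assumption: the guest kernel ships ip6table_nat).
		load := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "sudo modprobe ip6table_nat")
		if out, err := load.CombinedOutput(); err != nil {
			log.Fatalf("modprobe ip6table_nat failed: %v\n%s", err, out)
		}
		fmt.Println("ip6table_nat loaded; the kubelet canary errors should stop")
		return
	}
	fmt.Println("ip6tables nat table is already usable")
}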

                                                
                                    
x
+
TestPreload (351.33s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-594120 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0717 19:05:05.952169  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 19:06:56.144527  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 19:07:13.091299  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-594120 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m28.360804633s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-594120 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-594120 image pull gcr.io/k8s-minikube/busybox: (3.084052657s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-594120
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-594120: exit status 82 (2m0.464705627s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-594120"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-594120 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-07-17 19:09:59.60863842 +0000 UTC m=+4055.925656065
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-594120 -n test-preload-594120
E0717 19:10:05.951892  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-594120 -n test-preload-594120: exit status 3 (18.500641813s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:10:18.104862  434759 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host
	E0717 19:10:18.104884  434759 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.216:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-594120" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-594120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-594120
--- FAIL: TestPreload (351.33s)
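Note: exit status 82 (GUEST_STOP_TIMEOUT) above means the kvm2 driver could not move the VM out of the "Running" state before the stop timeout, and the later status probe then fails with "no route to host" once the machine drops off the network. When reproducing locally, the libvirt domain can be inspected and force-stopped directly; a minimal sketch, assuming virsh is on PATH and that the domain is named after the profile (test-preload-594120):

// Sketch only: query the libvirt domain state and force it off if a graceful stop timed out.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const domain = "test-preload-594120" // assumption: the kvm2 driver names the libvirt domain after the profile

	// Ask libvirt for the domain's current state (e.g. "running" or "shut off").
	out, err := exec.Command("sudo", "virsh", "domstate", domain).CombinedOutput()
	if err != nil {
		log.Fatalf("virsh domstate failed: %v\n%s", err, out)
	}
	state := strings.TrimSpace(string(out))
	fmt.Printf("domain %s is %s\n", domain, state)

	// If `minikube stop` timed out, force the domain off so the profile can be deleted.
	if state == "running" {
		if out, err := exec.Command("sudo", "virsh", "destroy", domain).CombinedOutput(); err != nil {
			log.Fatalf("virsh destroy failed: %v\n%s", err, out)
		}
		fmt.Println("domain forced off")
	}
}

Forcing the domain off is the equivalent of pulling the power; the harness's own fallback is the `delete -p test-preload-594120` cleanup shown above.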

                                                
                                    
x
+
TestKubernetesUpgrade (440.7s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-442321 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-442321 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m28.042133359s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-442321] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19282
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-442321" primary control-plane node in "kubernetes-upgrade-442321" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:12:14.248465  435843 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:12:14.248691  435843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:12:14.248706  435843 out.go:304] Setting ErrFile to fd 2...
	I0717 19:12:14.248713  435843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:12:14.249001  435843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:12:14.249883  435843 out.go:298] Setting JSON to false
	I0717 19:12:14.251193  435843 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10477,"bootTime":1721233057,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:12:14.251276  435843 start.go:139] virtualization: kvm guest
	I0717 19:12:14.254301  435843 out.go:177] * [kubernetes-upgrade-442321] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:12:14.256762  435843 notify.go:220] Checking for updates...
	I0717 19:12:14.257918  435843 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 19:12:14.260236  435843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:12:14.263451  435843 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:12:14.265903  435843 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:12:14.268134  435843 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:12:14.270488  435843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:12:14.272103  435843 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:12:14.309976  435843 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 19:12:14.311338  435843 start.go:297] selected driver: kvm2
	I0717 19:12:14.311366  435843 start.go:901] validating driver "kvm2" against <nil>
	I0717 19:12:14.311381  435843 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:12:14.312561  435843 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:12:14.312681  435843 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:12:14.329463  435843 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 19:12:14.329531  435843 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 19:12:14.329857  435843 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 19:12:14.329897  435843 cni.go:84] Creating CNI manager for ""
	I0717 19:12:14.329908  435843 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:12:14.329923  435843 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 19:12:14.329992  435843 start.go:340] cluster config:
	{Name:kubernetes-upgrade-442321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-442321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:12:14.330126  435843 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:12:14.332119  435843 out.go:177] * Starting "kubernetes-upgrade-442321" primary control-plane node in "kubernetes-upgrade-442321" cluster
	I0717 19:12:14.333120  435843 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:12:14.333160  435843 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 19:12:14.333182  435843 cache.go:56] Caching tarball of preloaded images
	I0717 19:12:14.333282  435843 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:12:14.333297  435843 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 19:12:14.333718  435843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/config.json ...
	I0717 19:12:14.333749  435843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/config.json: {Name:mk0692a3eb7b11c8244683d429ca8c196502ca66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:12:14.333915  435843 start.go:360] acquireMachinesLock for kubernetes-upgrade-442321: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:12:14.333970  435843 start.go:364] duration metric: took 31.812µs to acquireMachinesLock for "kubernetes-upgrade-442321"
	I0717 19:12:14.334003  435843 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-442321 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-442321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:12:14.334057  435843 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 19:12:14.335625  435843 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 19:12:14.335782  435843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:12:14.335823  435843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:12:14.351357  435843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34761
	I0717 19:12:14.352005  435843 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:12:14.352699  435843 main.go:141] libmachine: Using API Version  1
	I0717 19:12:14.352724  435843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:12:14.353108  435843 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:12:14.353290  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetMachineName
	I0717 19:12:14.353480  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .DriverName
	I0717 19:12:14.353641  435843 start.go:159] libmachine.API.Create for "kubernetes-upgrade-442321" (driver="kvm2")
	I0717 19:12:14.353667  435843 client.go:168] LocalClient.Create starting
	I0717 19:12:14.353695  435843 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem
	I0717 19:12:14.353725  435843 main.go:141] libmachine: Decoding PEM data...
	I0717 19:12:14.353741  435843 main.go:141] libmachine: Parsing certificate...
	I0717 19:12:14.353786  435843 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem
	I0717 19:12:14.353811  435843 main.go:141] libmachine: Decoding PEM data...
	I0717 19:12:14.353824  435843 main.go:141] libmachine: Parsing certificate...
	I0717 19:12:14.353840  435843 main.go:141] libmachine: Running pre-create checks...
	I0717 19:12:14.353853  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .PreCreateCheck
	I0717 19:12:14.354281  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetConfigRaw
	I0717 19:12:14.354647  435843 main.go:141] libmachine: Creating machine...
	I0717 19:12:14.354663  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .Create
	I0717 19:12:14.354816  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Creating KVM machine...
	I0717 19:12:14.356176  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found existing default KVM network
	I0717 19:12:14.356990  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:14.356834  435903 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091c0}
	I0717 19:12:14.357019  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | created network xml: 
	I0717 19:12:14.357033  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | <network>
	I0717 19:12:14.357042  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG |   <name>mk-kubernetes-upgrade-442321</name>
	I0717 19:12:14.357052  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG |   <dns enable='no'/>
	I0717 19:12:14.357065  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG |   
	I0717 19:12:14.357078  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 19:12:14.357089  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG |     <dhcp>
	I0717 19:12:14.357120  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 19:12:14.357151  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG |     </dhcp>
	I0717 19:12:14.357166  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG |   </ip>
	I0717 19:12:14.357177  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG |   
	I0717 19:12:14.357193  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | </network>
	I0717 19:12:14.357203  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | 
	I0717 19:12:14.361934  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | trying to create private KVM network mk-kubernetes-upgrade-442321 192.168.39.0/24...
	I0717 19:12:14.429830  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | private KVM network mk-kubernetes-upgrade-442321 192.168.39.0/24 created
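The private libvirt network defined in the XML above can be inspected from the host with standard virsh commands; a sketch, assuming access to the qemu:///system connection the kvm2 driver uses (sudo may be needed):

    virsh -c qemu:///system net-list --all
    virsh -c qemu:///system net-dumpxml mk-kubernetes-upgrade-442321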
	I0717 19:12:14.429863  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:14.429804  435903 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:12:14.429877  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Setting up store path in /home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321 ...
	I0717 19:12:14.429895  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Building disk image from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 19:12:14.429972  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Downloading /home/jenkins/minikube-integration/19282-392903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 19:12:14.691760  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:14.691634  435903 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321/id_rsa...
	I0717 19:12:14.816694  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:14.816578  435903 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321/kubernetes-upgrade-442321.rawdisk...
	I0717 19:12:14.816726  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Writing magic tar header
	I0717 19:12:14.816742  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Writing SSH key tar header
	I0717 19:12:14.816750  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:14.816698  435903 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321 ...
	I0717 19:12:14.816775  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321
	I0717 19:12:14.816842  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines
	I0717 19:12:14.816878  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321 (perms=drwx------)
	I0717 19:12:14.816898  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:12:14.816908  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines (perms=drwxr-xr-x)
	I0717 19:12:14.816925  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube (perms=drwxr-xr-x)
	I0717 19:12:14.816936  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903 (perms=drwxrwxr-x)
	I0717 19:12:14.816948  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 19:12:14.816964  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903
	I0717 19:12:14.816979  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 19:12:14.816988  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Checking permissions on dir: /home/jenkins
	I0717 19:12:14.817011  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 19:12:14.817024  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Checking permissions on dir: /home
	I0717 19:12:14.817038  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Skipping /home - not owner
	I0717 19:12:14.817051  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Creating domain...
	I0717 19:12:14.817954  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) define libvirt domain using xml: 
	I0717 19:12:14.817976  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) <domain type='kvm'>
	I0717 19:12:14.817983  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)   <name>kubernetes-upgrade-442321</name>
	I0717 19:12:14.817990  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)   <memory unit='MiB'>2200</memory>
	I0717 19:12:14.817995  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)   <vcpu>2</vcpu>
	I0717 19:12:14.817999  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)   <features>
	I0717 19:12:14.818005  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     <acpi/>
	I0717 19:12:14.818009  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     <apic/>
	I0717 19:12:14.818016  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     <pae/>
	I0717 19:12:14.818028  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     
	I0717 19:12:14.818038  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)   </features>
	I0717 19:12:14.818050  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)   <cpu mode='host-passthrough'>
	I0717 19:12:14.818057  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)   
	I0717 19:12:14.818063  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)   </cpu>
	I0717 19:12:14.818089  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)   <os>
	I0717 19:12:14.818114  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     <type>hvm</type>
	I0717 19:12:14.818125  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     <boot dev='cdrom'/>
	I0717 19:12:14.818135  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     <boot dev='hd'/>
	I0717 19:12:14.818152  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     <bootmenu enable='no'/>
	I0717 19:12:14.818160  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)   </os>
	I0717 19:12:14.818165  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)   <devices>
	I0717 19:12:14.818173  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     <disk type='file' device='cdrom'>
	I0717 19:12:14.818187  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321/boot2docker.iso'/>
	I0717 19:12:14.818194  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)       <target dev='hdc' bus='scsi'/>
	I0717 19:12:14.818200  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)       <readonly/>
	I0717 19:12:14.818210  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     </disk>
	I0717 19:12:14.818229  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     <disk type='file' device='disk'>
	I0717 19:12:14.818244  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 19:12:14.818257  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321/kubernetes-upgrade-442321.rawdisk'/>
	I0717 19:12:14.818276  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)       <target dev='hda' bus='virtio'/>
	I0717 19:12:14.818289  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     </disk>
	I0717 19:12:14.818304  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     <interface type='network'>
	I0717 19:12:14.818318  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)       <source network='mk-kubernetes-upgrade-442321'/>
	I0717 19:12:14.818326  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)       <model type='virtio'/>
	I0717 19:12:14.818337  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     </interface>
	I0717 19:12:14.818346  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     <interface type='network'>
	I0717 19:12:14.818355  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)       <source network='default'/>
	I0717 19:12:14.818363  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)       <model type='virtio'/>
	I0717 19:12:14.818376  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     </interface>
	I0717 19:12:14.818391  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     <serial type='pty'>
	I0717 19:12:14.818406  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)       <target port='0'/>
	I0717 19:12:14.818416  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     </serial>
	I0717 19:12:14.818425  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     <console type='pty'>
	I0717 19:12:14.818435  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)       <target type='serial' port='0'/>
	I0717 19:12:14.818440  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     </console>
	I0717 19:12:14.818451  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     <rng model='virtio'>
	I0717 19:12:14.818463  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)       <backend model='random'>/dev/random</backend>
	I0717 19:12:14.818477  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     </rng>
	I0717 19:12:14.818488  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     
	I0717 19:12:14.818498  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)     
	I0717 19:12:14.818511  435843 main.go:141] libmachine: (kubernetes-upgrade-442321)   </devices>
	I0717 19:12:14.818519  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) </domain>
	I0717 19:12:14.818528  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) 
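Once the domain XML above has been defined, the resulting VM definition and its DHCP-assigned address can be checked from the host; a sketch, again assuming the qemu:///system connection:

    virsh -c qemu:///system dumpxml kubernetes-upgrade-442321
    virsh -c qemu:///system domifaddr kubernetes-upgrade-442321   # shows the leased IP once the guest is up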
	I0717 19:12:14.822709  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:30:4e:b1 in network default
	I0717 19:12:14.823306  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Ensuring networks are active...
	I0717 19:12:14.823340  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:14.824070  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Ensuring network default is active
	I0717 19:12:14.824348  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Ensuring network mk-kubernetes-upgrade-442321 is active
	I0717 19:12:14.824995  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Getting domain xml...
	I0717 19:12:14.825693  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Creating domain...
	I0717 19:12:16.057143  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Waiting to get IP...
	I0717 19:12:16.058543  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:16.059455  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | unable to find current IP address of domain kubernetes-upgrade-442321 in network mk-kubernetes-upgrade-442321
	I0717 19:12:16.059489  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:16.059417  435903 retry.go:31] will retry after 213.949058ms: waiting for machine to come up
	I0717 19:12:16.274852  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:16.275393  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | unable to find current IP address of domain kubernetes-upgrade-442321 in network mk-kubernetes-upgrade-442321
	I0717 19:12:16.275414  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:16.275340  435903 retry.go:31] will retry after 262.484568ms: waiting for machine to come up
	I0717 19:12:16.539749  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:16.540317  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | unable to find current IP address of domain kubernetes-upgrade-442321 in network mk-kubernetes-upgrade-442321
	I0717 19:12:16.540342  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:16.540271  435903 retry.go:31] will retry after 338.995181ms: waiting for machine to come up
	I0717 19:12:16.880766  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:16.881196  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | unable to find current IP address of domain kubernetes-upgrade-442321 in network mk-kubernetes-upgrade-442321
	I0717 19:12:16.881224  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:16.881151  435903 retry.go:31] will retry after 469.587362ms: waiting for machine to come up
	I0717 19:12:17.352893  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:17.353384  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | unable to find current IP address of domain kubernetes-upgrade-442321 in network mk-kubernetes-upgrade-442321
	I0717 19:12:17.353415  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:17.353334  435903 retry.go:31] will retry after 498.530154ms: waiting for machine to come up
	I0717 19:12:17.853087  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:17.853455  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | unable to find current IP address of domain kubernetes-upgrade-442321 in network mk-kubernetes-upgrade-442321
	I0717 19:12:17.853481  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:17.853405  435903 retry.go:31] will retry after 582.680617ms: waiting for machine to come up
	I0717 19:12:18.437232  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:18.437613  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | unable to find current IP address of domain kubernetes-upgrade-442321 in network mk-kubernetes-upgrade-442321
	I0717 19:12:18.437670  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:18.437563  435903 retry.go:31] will retry after 803.454891ms: waiting for machine to come up
	I0717 19:12:19.242319  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:19.242819  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | unable to find current IP address of domain kubernetes-upgrade-442321 in network mk-kubernetes-upgrade-442321
	I0717 19:12:19.242998  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:19.242916  435903 retry.go:31] will retry after 1.446284618s: waiting for machine to come up
	I0717 19:12:20.690736  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:20.691123  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | unable to find current IP address of domain kubernetes-upgrade-442321 in network mk-kubernetes-upgrade-442321
	I0717 19:12:20.691153  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:20.691064  435903 retry.go:31] will retry after 1.411367519s: waiting for machine to come up
	I0717 19:12:22.104573  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:22.105021  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | unable to find current IP address of domain kubernetes-upgrade-442321 in network mk-kubernetes-upgrade-442321
	I0717 19:12:22.105045  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:22.105001  435903 retry.go:31] will retry after 1.832413726s: waiting for machine to come up
	I0717 19:12:23.938569  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:23.938954  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | unable to find current IP address of domain kubernetes-upgrade-442321 in network mk-kubernetes-upgrade-442321
	I0717 19:12:23.938989  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:23.938896  435903 retry.go:31] will retry after 1.9512458s: waiting for machine to come up
	I0717 19:12:25.891433  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:25.891917  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | unable to find current IP address of domain kubernetes-upgrade-442321 in network mk-kubernetes-upgrade-442321
	I0717 19:12:25.891950  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:25.891872  435903 retry.go:31] will retry after 3.132745353s: waiting for machine to come up
	I0717 19:12:29.026545  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:29.026961  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | unable to find current IP address of domain kubernetes-upgrade-442321 in network mk-kubernetes-upgrade-442321
	I0717 19:12:29.026986  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:29.026918  435903 retry.go:31] will retry after 3.434467095s: waiting for machine to come up
	I0717 19:12:32.465049  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:32.465440  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | unable to find current IP address of domain kubernetes-upgrade-442321 in network mk-kubernetes-upgrade-442321
	I0717 19:12:32.465464  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | I0717 19:12:32.465391  435903 retry.go:31] will retry after 3.53670116s: waiting for machine to come up
	I0717 19:12:36.004578  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.005044  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has current primary IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.005069  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Found IP for machine: 192.168.39.49
	I0717 19:12:36.005082  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Reserving static IP address...
	I0717 19:12:36.005394  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-442321", mac: "52:54:00:0a:8f:52", ip: "192.168.39.49"} in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.082676  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Getting to WaitForSSH function...
	I0717 19:12:36.082715  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Reserved static IP address: 192.168.39.49
	I0717 19:12:36.082762  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Waiting for SSH to be available...
	I0717 19:12:36.085182  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.085706  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:36.085735  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.085939  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Using SSH client type: external
	I0717 19:12:36.085971  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321/id_rsa (-rw-------)
	I0717 19:12:36.086015  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:12:36.086033  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | About to run SSH command:
	I0717 19:12:36.086042  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | exit 0
	I0717 19:12:36.212673  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | SSH cmd err, output: <nil>: 
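The "exit 0" probe above is run through an external ssh process; an equivalent manual check, assembled from the options and key path already logged, would be:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321/id_rsa \
        docker@192.168.39.49 'exit 0'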
	I0717 19:12:36.212959  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) KVM machine creation complete!
	I0717 19:12:36.213222  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetConfigRaw
	I0717 19:12:36.213755  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .DriverName
	I0717 19:12:36.213958  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .DriverName
	I0717 19:12:36.214116  435843 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 19:12:36.214131  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetState
	I0717 19:12:36.215669  435843 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 19:12:36.215685  435843 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 19:12:36.215693  435843 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 19:12:36.215702  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHHostname
	I0717 19:12:36.218205  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.218550  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:36.218590  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.218716  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHPort
	I0717 19:12:36.218911  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:12:36.219078  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:12:36.219295  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHUsername
	I0717 19:12:36.219496  435843 main.go:141] libmachine: Using SSH client type: native
	I0717 19:12:36.219762  435843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0717 19:12:36.219783  435843 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 19:12:36.331761  435843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:12:36.331785  435843 main.go:141] libmachine: Detecting the provisioner...
	I0717 19:12:36.331796  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHHostname
	I0717 19:12:36.334216  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.334533  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:36.334563  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.334689  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHPort
	I0717 19:12:36.334905  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:12:36.335071  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:12:36.335166  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHUsername
	I0717 19:12:36.335451  435843 main.go:141] libmachine: Using SSH client type: native
	I0717 19:12:36.335716  435843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0717 19:12:36.335738  435843 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 19:12:36.449166  435843 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 19:12:36.449268  435843 main.go:141] libmachine: found compatible host: buildroot
	I0717 19:12:36.449284  435843 main.go:141] libmachine: Provisioning with buildroot...
	I0717 19:12:36.449297  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetMachineName
	I0717 19:12:36.449557  435843 buildroot.go:166] provisioning hostname "kubernetes-upgrade-442321"
	I0717 19:12:36.449589  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetMachineName
	I0717 19:12:36.449798  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHHostname
	I0717 19:12:36.452283  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.452621  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:36.452638  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.452787  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHPort
	I0717 19:12:36.452961  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:12:36.453114  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:12:36.453275  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHUsername
	I0717 19:12:36.453416  435843 main.go:141] libmachine: Using SSH client type: native
	I0717 19:12:36.453617  435843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0717 19:12:36.453634  435843 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-442321 && echo "kubernetes-upgrade-442321" | sudo tee /etc/hostname
	I0717 19:12:36.578722  435843 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-442321
	
	I0717 19:12:36.578747  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHHostname
	I0717 19:12:36.581380  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.581757  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:36.581802  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.582088  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHPort
	I0717 19:12:36.582285  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:12:36.582463  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:12:36.582597  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHUsername
	I0717 19:12:36.582765  435843 main.go:141] libmachine: Using SSH client type: native
	I0717 19:12:36.582953  435843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0717 19:12:36.582970  435843 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-442321' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-442321/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-442321' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:12:36.711294  435843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:12:36.711323  435843 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:12:36.711355  435843 buildroot.go:174] setting up certificates
	I0717 19:12:36.711365  435843 provision.go:84] configureAuth start
	I0717 19:12:36.711374  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetMachineName
	I0717 19:12:36.711635  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetIP
	I0717 19:12:36.714292  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.714593  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:36.714628  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.714764  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHHostname
	I0717 19:12:36.716742  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.717102  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:36.717125  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.717231  435843 provision.go:143] copyHostCerts
	I0717 19:12:36.717286  435843 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:12:36.717296  435843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:12:36.717350  435843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:12:36.717442  435843 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:12:36.717454  435843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:12:36.717476  435843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:12:36.717539  435843 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:12:36.717546  435843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:12:36.717565  435843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:12:36.717622  435843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-442321 san=[127.0.0.1 192.168.39.49 kubernetes-upgrade-442321 localhost minikube]
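One way to confirm the SAN list of the server certificate generated above is to inspect it with openssl, using the path from the log:

    openssl x509 -in /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'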
	I0717 19:12:36.938433  435843 provision.go:177] copyRemoteCerts
	I0717 19:12:36.938495  435843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:12:36.938521  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHHostname
	I0717 19:12:36.941115  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.941426  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:36.941458  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:36.941623  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHPort
	I0717 19:12:36.941842  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:12:36.942008  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHUsername
	I0717 19:12:36.942134  435843 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321/id_rsa Username:docker}
	I0717 19:12:37.033798  435843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:12:37.064417  435843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:12:37.088660  435843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 19:12:37.111376  435843 provision.go:87] duration metric: took 399.996246ms to configureAuth
	I0717 19:12:37.111414  435843 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:12:37.111612  435843 config.go:182] Loaded profile config "kubernetes-upgrade-442321": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:12:37.111698  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHHostname
	I0717 19:12:37.114304  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:37.114805  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:37.114859  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:37.114918  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHPort
	I0717 19:12:37.115146  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:12:37.115334  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:12:37.115470  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHUsername
	I0717 19:12:37.115629  435843 main.go:141] libmachine: Using SSH client type: native
	I0717 19:12:37.115833  435843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0717 19:12:37.115850  435843 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:12:37.398581  435843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
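The drop-in written by the command above can be verified from inside the guest over the same SSH session; a sketch (run as the docker user on the Buildroot guest):

    cat /etc/sysconfig/crio.minikube
    sudo systemctl is-active crio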
	
	I0717 19:12:37.398607  435843 main.go:141] libmachine: Checking connection to Docker...
	I0717 19:12:37.398616  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetURL
	I0717 19:12:37.399807  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Using libvirt version 6000000
	I0717 19:12:37.402018  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:37.402347  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:37.402377  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:37.402565  435843 main.go:141] libmachine: Docker is up and running!
	I0717 19:12:37.402583  435843 main.go:141] libmachine: Reticulating splines...
	I0717 19:12:37.402592  435843 client.go:171] duration metric: took 23.048916069s to LocalClient.Create
	I0717 19:12:37.402619  435843 start.go:167] duration metric: took 23.048978855s to libmachine.API.Create "kubernetes-upgrade-442321"
	I0717 19:12:37.402630  435843 start.go:293] postStartSetup for "kubernetes-upgrade-442321" (driver="kvm2")
	I0717 19:12:37.402642  435843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:12:37.402661  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .DriverName
	I0717 19:12:37.402921  435843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:12:37.402943  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHHostname
	I0717 19:12:37.405316  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:37.405709  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:37.405741  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:37.405892  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHPort
	I0717 19:12:37.406079  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:12:37.406327  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHUsername
	I0717 19:12:37.406490  435843 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321/id_rsa Username:docker}
	I0717 19:12:37.491856  435843 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:12:37.496502  435843 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:12:37.496528  435843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:12:37.496605  435843 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:12:37.496732  435843 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:12:37.496878  435843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:12:37.507558  435843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:12:37.530060  435843 start.go:296] duration metric: took 127.413355ms for postStartSetup
	I0717 19:12:37.530114  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetConfigRaw
	I0717 19:12:37.530664  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetIP
	I0717 19:12:37.533280  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:37.533604  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:37.533628  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:37.533854  435843 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/config.json ...
	I0717 19:12:37.534024  435843 start.go:128] duration metric: took 23.199951956s to createHost
	I0717 19:12:37.534045  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHHostname
	I0717 19:12:37.535965  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:37.536244  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:37.536271  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:37.536433  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHPort
	I0717 19:12:37.536641  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:12:37.536843  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:12:37.537017  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHUsername
	I0717 19:12:37.537202  435843 main.go:141] libmachine: Using SSH client type: native
	I0717 19:12:37.537407  435843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0717 19:12:37.537424  435843 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 19:12:37.653728  435843 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721243557.628856849
	
	I0717 19:12:37.653755  435843 fix.go:216] guest clock: 1721243557.628856849
	I0717 19:12:37.653765  435843 fix.go:229] Guest: 2024-07-17 19:12:37.628856849 +0000 UTC Remote: 2024-07-17 19:12:37.5340353 +0000 UTC m=+23.332935059 (delta=94.821549ms)
	I0717 19:12:37.653796  435843 fix.go:200] guest clock delta is within tolerance: 94.821549ms
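(The delta reported above is simply the guest clock minus the host-side timestamp: 1721243557.628856849 - 1721243557.534035300 ≈ 0.094822 s, i.e. the 94.821549ms shown, well inside the skew tolerance.)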
	I0717 19:12:37.653818  435843 start.go:83] releasing machines lock for "kubernetes-upgrade-442321", held for 23.319821522s
	I0717 19:12:37.653851  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .DriverName
	I0717 19:12:37.654126  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetIP
	I0717 19:12:37.656789  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:37.657166  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:37.657200  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:37.657418  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .DriverName
	I0717 19:12:37.657952  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .DriverName
	I0717 19:12:37.658159  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .DriverName
	I0717 19:12:37.658261  435843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:12:37.658317  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHHostname
	I0717 19:12:37.658385  435843 ssh_runner.go:195] Run: cat /version.json
	I0717 19:12:37.658412  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHHostname
	I0717 19:12:37.661302  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:37.661693  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:37.661727  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:37.661767  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:37.661900  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHPort
	I0717 19:12:37.662104  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:12:37.662234  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:37.662259  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:37.662295  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHUsername
	I0717 19:12:37.662436  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHPort
	I0717 19:12:37.662527  435843 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321/id_rsa Username:docker}
	I0717 19:12:37.662598  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:12:37.662713  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHUsername
	I0717 19:12:37.662853  435843 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321/id_rsa Username:docker}
	I0717 19:12:37.776105  435843 ssh_runner.go:195] Run: systemctl --version
	I0717 19:12:37.785866  435843 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:12:37.951252  435843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:12:37.958003  435843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:12:37.958083  435843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:12:37.974689  435843 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:12:37.974713  435843 start.go:495] detecting cgroup driver to use...
	I0717 19:12:37.974789  435843 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:12:37.991750  435843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:12:38.005785  435843 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:12:38.005844  435843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:12:38.025817  435843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:12:38.040317  435843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:12:38.166472  435843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:12:38.310139  435843 docker.go:233] disabling docker service ...
	I0717 19:12:38.310226  435843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:12:38.324756  435843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:12:38.339505  435843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:12:38.482118  435843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:12:38.597084  435843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
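	A minimal sketch (unit names taken from the log above) to confirm the runtime switch by hand; not part of the test output:
	  sudo systemctl is-active docker   # expect anything but "active" after the stop + mask steps above
	  sudo systemctl is-active crio     # becomes "active" once crio is restarted further down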
	I0717 19:12:38.616800  435843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:12:38.635681  435843 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 19:12:38.635751  435843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:12:38.645421  435843 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:12:38.645474  435843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:12:38.655724  435843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:12:38.665444  435843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
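	A sketch of the values the three sed edits above should leave behind (path and keys copied from the log):
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.2"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"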
	I0717 19:12:38.675207  435843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:12:38.685273  435843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:12:38.695052  435843 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:12:38.695096  435843 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:12:38.710697  435843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
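	A minimal sketch of the netfilter fallback above: the sysctl only exists once br_netfilter is loaded, which is why the first probe exits with status 255.
	  sudo modprobe br_netfilter
	  sudo sysctl net.bridge.bridge-nf-call-iptables   # now resolvable
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"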
	I0717 19:12:38.721446  435843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:12:38.856702  435843 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:12:39.006568  435843 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:12:39.006633  435843 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:12:39.013579  435843 start.go:563] Will wait 60s for crictl version
	I0717 19:12:39.013646  435843 ssh_runner.go:195] Run: which crictl
	I0717 19:12:39.017936  435843 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:12:39.064460  435843 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:12:39.064573  435843 ssh_runner.go:195] Run: crio --version
	I0717 19:12:39.095785  435843 ssh_runner.go:195] Run: crio --version
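	The same runtime probe can be reproduced by hand against the socket the log waits on (a sketch, not output from this run):
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  # Version: 0.1.0, RuntimeName: cri-o, RuntimeVersion: 1.29.1, RuntimeApiVersion: v1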
	I0717 19:12:39.128937  435843 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 19:12:39.130401  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetIP
	I0717 19:12:39.133698  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:39.134317  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:12:39.134350  435843 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:12:39.134668  435843 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:12:39.139697  435843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:12:39.154097  435843 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-442321 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-442321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:12:39.154244  435843 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:12:39.154320  435843 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:12:39.187570  435843 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:12:39.187653  435843 ssh_runner.go:195] Run: which lz4
	I0717 19:12:39.191851  435843 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 19:12:39.196086  435843 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:12:39.196120  435843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 19:12:40.943306  435843 crio.go:462] duration metric: took 1.751493666s to copy over tarball
	I0717 19:12:40.943397  435843 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:12:43.453815  435843 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.51037765s)
	I0717 19:12:43.453856  435843 crio.go:469] duration metric: took 2.510507364s to extract the tarball
	I0717 19:12:43.453866  435843 ssh_runner.go:146] rm: /preloaded.tar.lz4
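	A sketch of the preload flow above (paths and flags copied from the log; the tarball name is build-specific):
	  stat -c "%s %y" /preloaded.tar.lz4 || echo "no preload on the guest yet"   # exits 1 on a fresh VM, as above
	  # minikube then copies the cached tarball over SSH and unpacks it into /var:
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	  sudo rm -f /preloaded.tar.lz4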
	I0717 19:12:43.499086  435843 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:12:43.545624  435843 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:12:43.545652  435843 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:12:43.545726  435843 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:12:43.545737  435843 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:12:43.545749  435843 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:12:43.545771  435843 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 19:12:43.545797  435843 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 19:12:43.545791  435843 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:12:43.545750  435843 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:12:43.545848  435843 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:12:43.547592  435843 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:12:43.547606  435843 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:12:43.547619  435843 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:12:43.547622  435843 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 19:12:43.547593  435843 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:12:43.547601  435843 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:12:43.547625  435843 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:12:43.547596  435843 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 19:12:43.714320  435843 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:12:43.753596  435843 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 19:12:43.753651  435843 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:12:43.753697  435843 ssh_runner.go:195] Run: which crictl
	I0717 19:12:43.755020  435843 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 19:12:43.757970  435843 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:12:43.801991  435843 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 19:12:43.816651  435843 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 19:12:43.816683  435843 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 19:12:43.816723  435843 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 19:12:43.816778  435843 ssh_runner.go:195] Run: which crictl
	I0717 19:12:43.847353  435843 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 19:12:43.847395  435843 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 19:12:43.847433  435843 ssh_runner.go:195] Run: which crictl
	I0717 19:12:43.847439  435843 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 19:12:43.879971  435843 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 19:12:43.880060  435843 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 19:12:43.895269  435843 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:12:43.901819  435843 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 19:12:43.908393  435843 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:12:43.939034  435843 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 19:12:43.959643  435843 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:12:43.999907  435843 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 19:12:43.999973  435843 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:12:43.999930  435843 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 19:12:44.000086  435843 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:12:44.000031  435843 ssh_runner.go:195] Run: which crictl
	I0717 19:12:44.000141  435843 ssh_runner.go:195] Run: which crictl
	I0717 19:12:44.017337  435843 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 19:12:44.017384  435843 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:12:44.017436  435843 ssh_runner.go:195] Run: which crictl
	I0717 19:12:44.032996  435843 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 19:12:44.033051  435843 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:12:44.033058  435843 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 19:12:44.033079  435843 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:12:44.033136  435843 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:12:44.033085  435843 ssh_runner.go:195] Run: which crictl
	I0717 19:12:44.095163  435843 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 19:12:44.095614  435843 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 19:12:44.095786  435843 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:12:44.097298  435843 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 19:12:44.128873  435843 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 19:12:44.468310  435843 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:12:44.611776  435843 cache_images.go:92] duration metric: took 1.066103694s to LoadCachedImages
	W0717 19:12:44.611896  435843 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
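	A bash rendering of the inspect/remove cycle the log repeats for each v1.20.0 image (hypothetical wrapper around the commands shown; the digest is the one quoted above for pause:3.2):
	  id=$(sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.2 2>/dev/null)
	  if [ "$id" != "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" ]; then
	    sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2   # drop the stale tag and fall back to the on-host cache,
	  fi                                                     # which is missing here, hence the warning above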
	I0717 19:12:44.611917  435843 kubeadm.go:934] updating node { 192.168.39.49 8443 v1.20.0 crio true true} ...
	I0717 19:12:44.612061  435843 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-442321 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-442321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:12:44.612153  435843 ssh_runner.go:195] Run: crio config
	I0717 19:12:44.666374  435843 cni.go:84] Creating CNI manager for ""
	I0717 19:12:44.666405  435843 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:12:44.666416  435843 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:12:44.666441  435843 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.49 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-442321 NodeName:kubernetes-upgrade-442321 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 19:12:44.666605  435843 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-442321"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:12:44.666677  435843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 19:12:44.677355  435843 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:12:44.677429  435843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:12:44.687203  435843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0717 19:12:44.703628  435843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:12:44.719977  435843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
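	The three scp lines above land the kubelet drop-in, the kubelet unit and the rendered kubeadm config on the guest; a sketch of where to inspect them (paths from the log):
	  systemctl cat kubelet                                      # unit plus the 10-kubeadm.conf drop-in
	  sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	  sudo cat /var/tmp/minikube/kubeadm.yaml.new                # later copied to kubeadm.yaml for 'kubeadm init --config'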
	I0717 19:12:44.736799  435843 ssh_runner.go:195] Run: grep 192.168.39.49	control-plane.minikube.internal$ /etc/hosts
	I0717 19:12:44.740634  435843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:12:44.753157  435843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:12:44.873678  435843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:12:44.890949  435843 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321 for IP: 192.168.39.49
	I0717 19:12:44.890976  435843 certs.go:194] generating shared ca certs ...
	I0717 19:12:44.890999  435843 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:12:44.891176  435843 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:12:44.891225  435843 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:12:44.891236  435843 certs.go:256] generating profile certs ...
	I0717 19:12:44.891304  435843 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/client.key
	I0717 19:12:44.891323  435843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/client.crt with IP's: []
	I0717 19:12:44.998847  435843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/client.crt ...
	I0717 19:12:44.998884  435843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/client.crt: {Name:mkd26bc0e5746d36cb83525988af66c2ab869132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:12:44.999119  435843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/client.key ...
	I0717 19:12:44.999144  435843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/client.key: {Name:mkf5d6884d742471c2c9610e1e3c11dcaec12399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:12:44.999261  435843 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/apiserver.key.2c89862e
	I0717 19:12:44.999280  435843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/apiserver.crt.2c89862e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.49]
	I0717 19:12:45.185181  435843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/apiserver.crt.2c89862e ...
	I0717 19:12:45.185218  435843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/apiserver.crt.2c89862e: {Name:mkd12014861cb55aa08dfd674655058c8ccdbf9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:12:45.185389  435843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/apiserver.key.2c89862e ...
	I0717 19:12:45.185403  435843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/apiserver.key.2c89862e: {Name:mke2c921b61c1ed2b2f93a4749722113478c6a50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:12:45.185489  435843 certs.go:381] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/apiserver.crt.2c89862e -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/apiserver.crt
	I0717 19:12:45.185572  435843 certs.go:385] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/apiserver.key.2c89862e -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/apiserver.key
	I0717 19:12:45.185633  435843 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/proxy-client.key
	I0717 19:12:45.185651  435843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/proxy-client.crt with IP's: []
	I0717 19:12:45.371354  435843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/proxy-client.crt ...
	I0717 19:12:45.371391  435843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/proxy-client.crt: {Name:mk4ce37117dde13e332d3aabf7ff0af3e4b5e03c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:12:45.371565  435843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/proxy-client.key ...
	I0717 19:12:45.371579  435843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/proxy-client.key: {Name:mk7e18e40115c9814c8436a130bcac1b827e0342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
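	A sketch to check the SANs baked into the freshly generated apiserver cert (IP list from the generation call above; any DNS names minikube adds are not shown in this log):
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'
	  # expect IP Address:10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.49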
	I0717 19:12:45.371772  435843 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:12:45.371821  435843 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:12:45.371834  435843 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:12:45.371864  435843 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:12:45.371894  435843 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:12:45.371918  435843 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:12:45.371959  435843 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:12:45.372902  435843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:12:45.400141  435843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:12:45.424716  435843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:12:45.451147  435843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:12:45.477125  435843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0717 19:12:45.500902  435843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:12:45.524899  435843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:12:45.548839  435843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:12:45.573601  435843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:12:45.597430  435843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:12:45.620514  435843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:12:45.643770  435843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:12:45.660396  435843 ssh_runner.go:195] Run: openssl version
	I0717 19:12:45.666329  435843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:12:45.677100  435843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:12:45.681698  435843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:12:45.681766  435843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:12:45.687692  435843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:12:45.698678  435843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:12:45.712638  435843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:12:45.717382  435843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:12:45.717454  435843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:12:45.731866  435843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:12:45.747103  435843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:12:45.760421  435843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:12:45.766967  435843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:12:45.767030  435843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:12:45.773214  435843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
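	The /etc/ssl/certs/<hash>.0 links created above follow OpenSSL's subject-hash convention; a sketch for the CA case (hash value from the log):
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem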
	I0717 19:12:45.786527  435843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:12:45.791097  435843 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 19:12:45.791164  435843 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-442321 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-442321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:12:45.791254  435843 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:12:45.791308  435843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:12:45.839652  435843 cri.go:89] found id: ""
	I0717 19:12:45.839762  435843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:12:45.849698  435843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:12:45.859601  435843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:12:45.869153  435843 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:12:45.869175  435843 kubeadm.go:157] found existing configuration files:
	
	I0717 19:12:45.869217  435843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:12:45.878217  435843 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:12:45.878269  435843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:12:45.887480  435843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:12:45.896195  435843 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:12:45.896249  435843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:12:45.906189  435843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:12:45.915961  435843 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:12:45.916018  435843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:12:45.925909  435843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:12:45.935040  435843 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:12:45.935105  435843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:12:45.946878  435843 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:12:46.063377  435843 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:12:46.063562  435843 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:12:46.212844  435843 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:12:46.213007  435843 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:12:46.213169  435843 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:12:46.415713  435843 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:12:46.543882  435843 out.go:204]   - Generating certificates and keys ...
	I0717 19:12:46.544016  435843 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:12:46.544096  435843 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:12:46.655036  435843 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 19:12:46.933586  435843 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 19:12:47.146094  435843 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 19:12:47.261222  435843 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 19:12:47.370809  435843 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 19:12:47.371019  435843 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-442321 localhost] and IPs [192.168.39.49 127.0.0.1 ::1]
	I0717 19:12:47.740119  435843 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 19:12:47.740377  435843 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-442321 localhost] and IPs [192.168.39.49 127.0.0.1 ::1]
	I0717 19:12:47.839212  435843 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 19:12:48.055050  435843 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 19:12:48.268688  435843 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 19:12:48.268954  435843 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:12:48.430768  435843 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:12:48.865596  435843 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:12:48.929482  435843 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:12:49.187184  435843 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:12:49.208564  435843 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:12:49.210002  435843 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:12:49.210068  435843 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:12:49.353679  435843 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:12:49.355625  435843 out.go:204]   - Booting up control plane ...
	I0717 19:12:49.355758  435843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:12:49.359179  435843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:12:49.360705  435843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:12:49.361936  435843 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:12:49.372734  435843 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:13:29.365876  435843 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:13:29.366229  435843 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:13:29.366515  435843 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:13:34.366927  435843 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:13:34.367126  435843 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:13:44.366821  435843 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:13:44.367091  435843 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:14:04.366722  435843 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:14:04.367054  435843 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:14:44.368141  435843 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:14:44.368437  435843 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:14:44.368502  435843 kubeadm.go:310] 
	I0717 19:14:44.368586  435843 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:14:44.368653  435843 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:14:44.368662  435843 kubeadm.go:310] 
	I0717 19:14:44.368720  435843 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:14:44.368770  435843 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:14:44.368925  435843 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:14:44.368936  435843 kubeadm.go:310] 
	I0717 19:14:44.369079  435843 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:14:44.369210  435843 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:14:44.369272  435843 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:14:44.369284  435843 kubeadm.go:310] 
	I0717 19:14:44.369447  435843 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:14:44.369568  435843 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:14:44.369599  435843 kubeadm.go:310] 
	I0717 19:14:44.369742  435843 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:14:44.369866  435843 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:14:44.369946  435843 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:14:44.370038  435843 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:14:44.370052  435843 kubeadm.go:310] 
	I0717 19:14:44.370749  435843 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:14:44.370881  435843 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:14:44.371023  435843 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
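	When reproducing this failure by hand, the probe kubeadm keeps retrying above and the kubelet journal are the first things to check (a sketch; not output from this run):
	  curl -sSL http://localhost:10248/healthz        # "ok" from a healthy kubelet; connection refused here
	  sudo journalctl -u kubelet --no-pager | tail -n 50
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause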
	W0717 19:14:44.371129  435843 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-442321 localhost] and IPs [192.168.39.49 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-442321 localhost] and IPs [192.168.39.49 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-442321 localhost] and IPs [192.168.39.49 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-442321 localhost] and IPs [192.168.39.49 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 19:14:44.371178  435843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:14:45.084709  435843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:14:45.098735  435843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:14:45.108385  435843 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:14:45.108404  435843 kubeadm.go:157] found existing configuration files:
	
	I0717 19:14:45.108448  435843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:14:45.117420  435843 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:14:45.117476  435843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:14:45.126618  435843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:14:45.135280  435843 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:14:45.135353  435843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:14:45.144646  435843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:14:45.154793  435843 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:14:45.154843  435843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:14:45.164201  435843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:14:45.172866  435843 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:14:45.172928  435843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:14:45.182105  435843 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:14:45.242758  435843 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:14:45.242839  435843 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:14:45.390062  435843 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:14:45.390221  435843 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:14:45.390339  435843 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:14:45.569315  435843 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:14:45.571558  435843 out.go:204]   - Generating certificates and keys ...
	I0717 19:14:45.571675  435843 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:14:45.571737  435843 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:14:45.571848  435843 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:14:45.571947  435843 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:14:45.572041  435843 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:14:45.572114  435843 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:14:45.572198  435843 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:14:45.572306  435843 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:14:45.572391  435843 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:14:45.572655  435843 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:14:45.572775  435843 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:14:45.572869  435843 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:14:45.790884  435843 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:14:45.973213  435843 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:14:46.071337  435843 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:14:46.164580  435843 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:14:46.186876  435843 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:14:46.187969  435843 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:14:46.188087  435843 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:14:46.337280  435843 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:14:46.339319  435843 out.go:204]   - Booting up control plane ...
	I0717 19:14:46.339436  435843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:14:46.340414  435843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:14:46.342129  435843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:14:46.342943  435843 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:14:46.345337  435843 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:15:26.348061  435843 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:15:26.348506  435843 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:15:26.348732  435843 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:15:31.349148  435843 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:15:31.349392  435843 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:15:41.350035  435843 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:15:41.350311  435843 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:16:01.349289  435843 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:16:01.349536  435843 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:16:41.349113  435843 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:16:41.349362  435843 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:16:41.349374  435843 kubeadm.go:310] 
	I0717 19:16:41.349427  435843 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:16:41.349512  435843 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:16:41.349521  435843 kubeadm.go:310] 
	I0717 19:16:41.349562  435843 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:16:41.349605  435843 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:16:41.349729  435843 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:16:41.349737  435843 kubeadm.go:310] 
	I0717 19:16:41.349873  435843 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:16:41.349915  435843 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:16:41.349955  435843 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:16:41.349962  435843 kubeadm.go:310] 
	I0717 19:16:41.350084  435843 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:16:41.350185  435843 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:16:41.350192  435843 kubeadm.go:310] 
	I0717 19:16:41.350332  435843 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:16:41.350440  435843 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:16:41.350536  435843 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:16:41.350623  435843 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:16:41.350632  435843 kubeadm.go:310] 
	I0717 19:16:41.351615  435843 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:16:41.351731  435843 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:16:41.351814  435843 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 19:16:41.351898  435843 kubeadm.go:394] duration metric: took 3m55.560738295s to StartCluster
	I0717 19:16:41.351956  435843 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:16:41.352028  435843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:16:41.409972  435843 cri.go:89] found id: ""
	I0717 19:16:41.410014  435843 logs.go:276] 0 containers: []
	W0717 19:16:41.410025  435843 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:16:41.410033  435843 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:16:41.410100  435843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:16:41.465836  435843 cri.go:89] found id: ""
	I0717 19:16:41.465873  435843 logs.go:276] 0 containers: []
	W0717 19:16:41.465885  435843 logs.go:278] No container was found matching "etcd"
	I0717 19:16:41.465892  435843 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:16:41.465959  435843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:16:41.514482  435843 cri.go:89] found id: ""
	I0717 19:16:41.514517  435843 logs.go:276] 0 containers: []
	W0717 19:16:41.514530  435843 logs.go:278] No container was found matching "coredns"
	I0717 19:16:41.514538  435843 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:16:41.514610  435843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:16:41.566949  435843 cri.go:89] found id: ""
	I0717 19:16:41.566984  435843 logs.go:276] 0 containers: []
	W0717 19:16:41.566993  435843 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:16:41.567000  435843 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:16:41.567076  435843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:16:41.616853  435843 cri.go:89] found id: ""
	I0717 19:16:41.616890  435843 logs.go:276] 0 containers: []
	W0717 19:16:41.616903  435843 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:16:41.616913  435843 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:16:41.617010  435843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:16:41.667129  435843 cri.go:89] found id: ""
	I0717 19:16:41.667163  435843 logs.go:276] 0 containers: []
	W0717 19:16:41.667174  435843 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:16:41.667183  435843 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:16:41.667250  435843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:16:41.713424  435843 cri.go:89] found id: ""
	I0717 19:16:41.713468  435843 logs.go:276] 0 containers: []
	W0717 19:16:41.713480  435843 logs.go:278] No container was found matching "kindnet"
	I0717 19:16:41.713493  435843 logs.go:123] Gathering logs for kubelet ...
	I0717 19:16:41.713518  435843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:16:41.785809  435843 logs.go:123] Gathering logs for dmesg ...
	I0717 19:16:41.785848  435843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:16:41.805479  435843 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:16:41.805520  435843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:16:42.037096  435843 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:16:42.037131  435843 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:16:42.037150  435843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:16:42.172168  435843 logs.go:123] Gathering logs for container status ...
	I0717 19:16:42.172202  435843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 19:16:42.223276  435843 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 19:16:42.223343  435843 out.go:239] * 
	* 
	W0717 19:16:42.223419  435843 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:16:42.223451  435843 out.go:239] * 
	* 
	W0717 19:16:42.224703  435843 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:16:42.228340  435843 out.go:177] 
	W0717 19:16:42.229523  435843 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:16:42.229608  435843 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 19:16:42.229651  435843 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 19:16:42.231768  435843 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-442321 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
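The repeated kubelet-check failures above all reduce to the kubelet on the kubernetes-upgrade-442321 VM never answering its health endpoint at 127.0.0.1:10248, so kubeadm's wait-control-plane phase times out before any control-plane container exists. A minimal sketch of following the troubleshooting hints printed in the log itself, run over ssh against this profile (commands are taken from the kubeadm output; the profile name and binary path come from this test run):

	out/minikube-linux-amd64 -p kubernetes-upgrade-442321 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p kubernetes-upgrade-442321 ssh "sudo journalctl -xeu kubelet --no-pager"
	out/minikube-linux-amd64 -p kubernetes-upgrade-442321 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

If the journal pointed at a cgroup-driver mismatch, the start could be retried with the flag minikube suggests in its own output; this is only an illustration of that suggestion, not a fix verified by this run:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-442321 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd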
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-442321
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-442321: (1.526680385s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-442321 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-442321 status --format={{.Host}}: exit status 7 (73.347076ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-442321 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-442321 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m21.037630837s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-442321 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-442321 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-442321 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (89.679854ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-442321] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19282
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-442321
	    minikube start -p kubernetes-upgrade-442321 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4423212 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-442321 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
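Exit status 106 here is the expected downgrade guard: minikube refuses to move the existing v1.31.0-beta.0 cluster back to v1.20.0 in place. Following option 1 from the suggestion above, actually obtaining a v1.20.0 cluster under this profile would mean recreating it; a sketch reusing the driver and runtime flags from this run:

    out/minikube-linux-amd64 delete -p kubernetes-upgrade-442321
    out/minikube-linux-amd64 start -p kubernetes-upgrade-442321 --memory=2200 \
      --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio

The test instead keeps the existing cluster and restarts it at v1.31.0-beta.0, which is what the next step does.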
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-442321 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-442321 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m26.356045455s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-17 19:19:31.44639675 +0000 UTC m=+4627.763414404
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-442321 -n kubernetes-upgrade-442321
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-442321 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-442321 logs -n 25: (1.761132383s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-369638 sudo                  | cilium-369638             | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-369638 sudo cat              | cilium-369638             | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-369638 sudo cat              | cilium-369638             | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-369638 sudo                  | cilium-369638             | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-369638 sudo                  | cilium-369638             | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-369638 sudo                  | cilium-369638             | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-369638 sudo find             | cilium-369638             | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-369638 sudo crio             | cilium-369638             | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-369638                       | cilium-369638             | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC | 17 Jul 24 19:16 UTC |
	| start   | -p cert-expiration-012081              | cert-expiration-012081    | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC | 17 Jul 24 19:16 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-919742           | force-systemd-flag-919742 | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC | 17 Jul 24 19:17 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-788788              | stopped-upgrade-788788    | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC | 17 Jul 24 19:16 UTC |
	| start   | -p cert-options-597798                 | cert-options-597798       | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC | 17 Jul 24 19:17 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-442321           | kubernetes-upgrade-442321 | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC | 17 Jul 24 19:16 UTC |
	| start   | -p kubernetes-upgrade-442321           | kubernetes-upgrade-442321 | jenkins | v1.33.1 | 17 Jul 24 19:16 UTC | 17 Jul 24 19:18 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0    |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-919742 ssh cat      | force-systemd-flag-919742 | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-919742           | force-systemd-flag-919742 | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| start   | -p pause-744869 --memory=2048          | pause-744869              | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:19 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-597798 ssh                | cert-options-597798       | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-597798 -- sudo         | cert-options-597798       | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-597798                 | cert-options-597798       | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC | 17 Jul 24 19:17 UTC |
	| start   | -p auto-369638 --memory=3072           | auto-369638               | jenkins | v1.33.1 | 17 Jul 24 19:17 UTC |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-442321           | kubernetes-upgrade-442321 | jenkins | v1.33.1 | 17 Jul 24 19:18 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-442321           | kubernetes-upgrade-442321 | jenkins | v1.33.1 | 17 Jul 24 19:18 UTC | 17 Jul 24 19:19 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0    |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-744869                        | pause-744869              | jenkins | v1.33.1 | 17 Jul 24 19:19 UTC |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:19:26
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:19:26.145567  444180 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:19:26.145820  444180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:19:26.145833  444180 out.go:304] Setting ErrFile to fd 2...
	I0717 19:19:26.145837  444180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:19:26.146066  444180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:19:26.146629  444180 out.go:298] Setting JSON to false
	I0717 19:19:26.147832  444180 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10909,"bootTime":1721233057,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:19:26.147889  444180 start.go:139] virtualization: kvm guest
	I0717 19:19:26.151026  444180 out.go:177] * [pause-744869] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:19:26.152540  444180 notify.go:220] Checking for updates...
	I0717 19:19:26.152579  444180 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 19:19:26.154154  444180 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:19:26.155587  444180 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:19:26.156942  444180 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:19:26.158136  444180 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:19:26.159476  444180 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:19:26.161371  444180 config.go:182] Loaded profile config "pause-744869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:19:26.161942  444180 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:19:26.162027  444180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:19:26.177980  444180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0717 19:19:26.178409  444180 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:19:26.178963  444180 main.go:141] libmachine: Using API Version  1
	I0717 19:19:26.178987  444180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:19:26.179379  444180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:19:26.179575  444180 main.go:141] libmachine: (pause-744869) Calling .DriverName
	I0717 19:19:26.179833  444180 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:19:26.180134  444180 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:19:26.180174  444180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:19:26.195756  444180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39083
	I0717 19:19:26.196260  444180 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:19:26.196879  444180 main.go:141] libmachine: Using API Version  1
	I0717 19:19:26.196905  444180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:19:26.197231  444180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:19:26.197426  444180 main.go:141] libmachine: (pause-744869) Calling .DriverName
	I0717 19:19:26.234841  444180 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:19:26.236181  444180 start.go:297] selected driver: kvm2
	I0717 19:19:26.236209  444180 start.go:901] validating driver "kvm2" against &{Name:pause-744869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:pause-744869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:19:26.236417  444180 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:19:26.236884  444180 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:19:26.236998  444180 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:19:26.254732  444180 install.go:137] /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 19:19:26.255722  444180 cni.go:84] Creating CNI manager for ""
	I0717 19:19:26.255740  444180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:19:26.255810  444180 start.go:340] cluster config:
	{Name:pause-744869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:pause-744869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.34 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:19:26.256002  444180 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:19:23.329984  443206 pod_ready.go:102] pod "coredns-7db6d8ff4d-nrpfn" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:25.833136  443206 pod_ready.go:102] pod "coredns-7db6d8ff4d-nrpfn" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:26.257904  444180 out.go:177] * Starting "pause-744869" primary control-plane node in "pause-744869" cluster
	I0717 19:19:26.259400  444180 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:19:26.259441  444180 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 19:19:26.259453  444180 cache.go:56] Caching tarball of preloaded images
	I0717 19:19:26.259530  444180 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:19:26.259543  444180 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 19:19:26.259699  444180 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/pause-744869/config.json ...
	I0717 19:19:26.259944  444180 start.go:360] acquireMachinesLock for pause-744869: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:19:26.260007  444180 start.go:364] duration metric: took 37.722µs to acquireMachinesLock for "pause-744869"
	I0717 19:19:26.260028  444180 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:19:26.260034  444180 fix.go:54] fixHost starting: 
	I0717 19:19:26.260407  444180 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:19:26.260451  444180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:19:26.275888  444180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40283
	I0717 19:19:26.276452  444180 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:19:26.277058  444180 main.go:141] libmachine: Using API Version  1
	I0717 19:19:26.277088  444180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:19:26.277495  444180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:19:26.277750  444180 main.go:141] libmachine: (pause-744869) Calling .DriverName
	I0717 19:19:26.277956  444180 main.go:141] libmachine: (pause-744869) Calling .GetState
	I0717 19:19:26.280000  444180 fix.go:112] recreateIfNeeded on pause-744869: state=Running err=<nil>
	W0717 19:19:26.280024  444180 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:19:26.282953  444180 out.go:177] * Updating the running kvm2 "pause-744869" VM ...
	I0717 19:19:25.242143  443595 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I0717 19:19:25.242782  443595 api_server.go:269] stopped: https://192.168.39.49:8443/healthz: Get "https://192.168.39.49:8443/healthz": dial tcp 192.168.39.49:8443: connect: connection refused
	I0717 19:19:25.741311  443595 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I0717 19:19:27.782908  443595 api_server.go:279] https://192.168.39.49:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:19:27.782948  443595 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:19:27.782983  443595 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I0717 19:19:27.824733  443595 api_server.go:279] https://192.168.39.49:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:19:27.824769  443595 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:19:28.242272  443595 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I0717 19:19:28.250093  443595 api_server.go:279] https://192.168.39.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:19:28.250126  443595 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:19:28.741502  443595 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I0717 19:19:28.750478  443595 api_server.go:279] https://192.168.39.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:19:28.750513  443595 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:19:29.241667  443595 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I0717 19:19:29.252411  443595 api_server.go:279] https://192.168.39.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:19:29.252442  443595 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:19:29.741665  443595 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I0717 19:19:29.746257  443595 api_server.go:279] https://192.168.39.49:8443/healthz returned 200:
	ok
	I0717 19:19:29.753605  443595 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:19:29.753637  443595 api_server.go:131] duration metric: took 24.012399169s to wait for apiserver health ...
	I0717 19:19:29.753655  443595 cni.go:84] Creating CNI manager for ""
	I0717 19:19:29.753664  443595 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:19:29.755305  443595 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:19:29.756500  443595 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:19:29.770444  443595 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:19:29.799680  443595 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:19:29.817898  443595 system_pods.go:59] 8 kube-system pods found
	I0717 19:19:29.817932  443595 system_pods.go:61] "coredns-5cfdc65f69-hd4pb" [93d456b6-0a2b-4a6c-adc2-26cb5d0bb450] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:19:29.817939  443595 system_pods.go:61] "coredns-5cfdc65f69-n5wvp" [f2140781-a181-4287-b87d-803e431f2da4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:19:29.817945  443595 system_pods.go:61] "etcd-kubernetes-upgrade-442321" [0fc1fa0f-52b0-4e3e-95f5-6aeae5e96a4a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:19:29.817951  443595 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-442321" [e37cd4de-4824-4525-938a-bc49b324878d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:19:29.817958  443595 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-442321" [95aad43b-eef8-4318-8d4e-f7b6d953dd6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:19:29.817962  443595 system_pods.go:61] "kube-proxy-7tnh7" [395043fd-99de-4099-901d-e51d5477ee2c] Running
	I0717 19:19:29.817968  443595 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-442321" [124746bc-9bfa-473e-a495-6bc3b93c613b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:19:29.817975  443595 system_pods.go:61] "storage-provisioner" [0ae902c2-0d30-4aa7-8eee-40f0f3fd46e5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 19:19:29.817983  443595 system_pods.go:74] duration metric: took 18.275489ms to wait for pod list to return data ...
	I0717 19:19:29.817992  443595 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:19:29.828966  443595 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:19:29.829011  443595 node_conditions.go:123] node cpu capacity is 2
	I0717 19:19:29.829025  443595 node_conditions.go:105] duration metric: took 11.02631ms to run NodePressure ...
	I0717 19:19:29.829051  443595 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:19:30.179419  443595 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:19:30.196408  443595 ops.go:34] apiserver oom_adj: -16
	I0717 19:19:30.196437  443595 kubeadm.go:597] duration metric: took 41.311146805s to restartPrimaryControlPlane
	I0717 19:19:30.196450  443595 kubeadm.go:394] duration metric: took 41.481265983s to StartCluster
	I0717 19:19:30.196474  443595 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:19:30.196591  443595 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:19:30.198067  443595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:19:30.198345  443595 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:19:30.198492  443595 config.go:182] Loaded profile config "kubernetes-upgrade-442321": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:19:30.198450  443595 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:19:30.198528  443595 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-442321"
	I0717 19:19:30.198564  443595 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-442321"
	I0717 19:19:30.198580  443595 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-442321"
	W0717 19:19:30.198590  443595 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:19:30.198633  443595 host.go:66] Checking if "kubernetes-upgrade-442321" exists ...
	I0717 19:19:30.198598  443595 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-442321"
	I0717 19:19:30.198996  443595 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:19:30.199028  443595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:19:30.199069  443595 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:19:30.199126  443595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:19:30.199850  443595 out.go:177] * Verifying Kubernetes components...
	I0717 19:19:30.201262  443595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:19:30.214553  443595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45915
	I0717 19:19:30.214737  443595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35799
	I0717 19:19:30.215031  443595 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:19:30.215255  443595 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:19:30.215553  443595 main.go:141] libmachine: Using API Version  1
	I0717 19:19:30.215576  443595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:19:30.215769  443595 main.go:141] libmachine: Using API Version  1
	I0717 19:19:30.215790  443595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:19:30.215912  443595 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:19:30.216131  443595 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:19:30.216301  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetState
	I0717 19:19:30.216470  443595 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:19:30.216520  443595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:19:30.219327  443595 kapi.go:59] client config for kubernetes-upgrade-442321: &rest.Config{Host:"https://192.168.39.49:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/client.crt", KeyFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kubernetes-upgrade-442321/client.key", CAFile:"/home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:19:30.219652  443595 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-442321"
	W0717 19:19:30.219670  443595 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:19:30.219700  443595 host.go:66] Checking if "kubernetes-upgrade-442321" exists ...
	I0717 19:19:30.220017  443595 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:19:30.220052  443595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:19:30.232415  443595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40639
	I0717 19:19:30.233139  443595 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:19:30.233753  443595 main.go:141] libmachine: Using API Version  1
	I0717 19:19:30.233778  443595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:19:30.234111  443595 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:19:30.234329  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetState
	I0717 19:19:30.235425  443595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45423
	I0717 19:19:30.235835  443595 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:19:30.235924  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .DriverName
	I0717 19:19:30.236402  443595 main.go:141] libmachine: Using API Version  1
	I0717 19:19:30.236419  443595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:19:30.236759  443595 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:19:30.237229  443595 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:19:30.237263  443595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:19:30.238101  443595 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:19:26.284540  444180 machine.go:94] provisionDockerMachine start ...
	I0717 19:19:26.284618  444180 main.go:141] libmachine: (pause-744869) Calling .DriverName
	I0717 19:19:26.285046  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHHostname
	I0717 19:19:26.288361  444180 main.go:141] libmachine: (pause-744869) DBG | domain pause-744869 has defined MAC address 52:54:00:04:fd:47 in network mk-pause-744869
	I0717 19:19:26.288882  444180 main.go:141] libmachine: (pause-744869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:fd:47", ip: ""} in network mk-pause-744869: {Iface:virbr2 ExpiryTime:2024-07-17 20:18:00 +0000 UTC Type:0 Mac:52:54:00:04:fd:47 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-744869 Clientid:01:52:54:00:04:fd:47}
	I0717 19:19:26.288912  444180 main.go:141] libmachine: (pause-744869) DBG | domain pause-744869 has defined IP address 192.168.50.34 and MAC address 52:54:00:04:fd:47 in network mk-pause-744869
	I0717 19:19:26.289096  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHPort
	I0717 19:19:26.289291  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHKeyPath
	I0717 19:19:26.289489  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHKeyPath
	I0717 19:19:26.289633  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHUsername
	I0717 19:19:26.289827  444180 main.go:141] libmachine: Using SSH client type: native
	I0717 19:19:26.290056  444180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0717 19:19:26.290068  444180 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:19:26.401473  444180 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-744869
	
	I0717 19:19:26.401507  444180 main.go:141] libmachine: (pause-744869) Calling .GetMachineName
	I0717 19:19:26.401789  444180 buildroot.go:166] provisioning hostname "pause-744869"
	I0717 19:19:26.401825  444180 main.go:141] libmachine: (pause-744869) Calling .GetMachineName
	I0717 19:19:26.402056  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHHostname
	I0717 19:19:26.405299  444180 main.go:141] libmachine: (pause-744869) DBG | domain pause-744869 has defined MAC address 52:54:00:04:fd:47 in network mk-pause-744869
	I0717 19:19:26.405770  444180 main.go:141] libmachine: (pause-744869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:fd:47", ip: ""} in network mk-pause-744869: {Iface:virbr2 ExpiryTime:2024-07-17 20:18:00 +0000 UTC Type:0 Mac:52:54:00:04:fd:47 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-744869 Clientid:01:52:54:00:04:fd:47}
	I0717 19:19:26.405805  444180 main.go:141] libmachine: (pause-744869) DBG | domain pause-744869 has defined IP address 192.168.50.34 and MAC address 52:54:00:04:fd:47 in network mk-pause-744869
	I0717 19:19:26.406038  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHPort
	I0717 19:19:26.406217  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHKeyPath
	I0717 19:19:26.406394  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHKeyPath
	I0717 19:19:26.406545  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHUsername
	I0717 19:19:26.406703  444180 main.go:141] libmachine: Using SSH client type: native
	I0717 19:19:26.406939  444180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0717 19:19:26.406956  444180 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-744869 && echo "pause-744869" | sudo tee /etc/hostname
	I0717 19:19:26.532445  444180 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-744869
	
	I0717 19:19:26.532480  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHHostname
	I0717 19:19:26.535725  444180 main.go:141] libmachine: (pause-744869) DBG | domain pause-744869 has defined MAC address 52:54:00:04:fd:47 in network mk-pause-744869
	I0717 19:19:26.536146  444180 main.go:141] libmachine: (pause-744869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:fd:47", ip: ""} in network mk-pause-744869: {Iface:virbr2 ExpiryTime:2024-07-17 20:18:00 +0000 UTC Type:0 Mac:52:54:00:04:fd:47 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-744869 Clientid:01:52:54:00:04:fd:47}
	I0717 19:19:26.536176  444180 main.go:141] libmachine: (pause-744869) DBG | domain pause-744869 has defined IP address 192.168.50.34 and MAC address 52:54:00:04:fd:47 in network mk-pause-744869
	I0717 19:19:26.536406  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHPort
	I0717 19:19:26.536642  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHKeyPath
	I0717 19:19:26.536842  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHKeyPath
	I0717 19:19:26.537009  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHUsername
	I0717 19:19:26.537217  444180 main.go:141] libmachine: Using SSH client type: native
	I0717 19:19:26.537469  444180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0717 19:19:26.537496  444180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-744869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-744869/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-744869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:19:26.645805  444180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:19:26.645849  444180 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:19:26.645910  444180 buildroot.go:174] setting up certificates
	I0717 19:19:26.645929  444180 provision.go:84] configureAuth start
	I0717 19:19:26.645948  444180 main.go:141] libmachine: (pause-744869) Calling .GetMachineName
	I0717 19:19:26.646284  444180 main.go:141] libmachine: (pause-744869) Calling .GetIP
	I0717 19:19:26.649530  444180 main.go:141] libmachine: (pause-744869) DBG | domain pause-744869 has defined MAC address 52:54:00:04:fd:47 in network mk-pause-744869
	I0717 19:19:26.650088  444180 main.go:141] libmachine: (pause-744869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:fd:47", ip: ""} in network mk-pause-744869: {Iface:virbr2 ExpiryTime:2024-07-17 20:18:00 +0000 UTC Type:0 Mac:52:54:00:04:fd:47 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-744869 Clientid:01:52:54:00:04:fd:47}
	I0717 19:19:26.650114  444180 main.go:141] libmachine: (pause-744869) DBG | domain pause-744869 has defined IP address 192.168.50.34 and MAC address 52:54:00:04:fd:47 in network mk-pause-744869
	I0717 19:19:26.650407  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHHostname
	I0717 19:19:26.653345  444180 main.go:141] libmachine: (pause-744869) DBG | domain pause-744869 has defined MAC address 52:54:00:04:fd:47 in network mk-pause-744869
	I0717 19:19:26.653767  444180 main.go:141] libmachine: (pause-744869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:fd:47", ip: ""} in network mk-pause-744869: {Iface:virbr2 ExpiryTime:2024-07-17 20:18:00 +0000 UTC Type:0 Mac:52:54:00:04:fd:47 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-744869 Clientid:01:52:54:00:04:fd:47}
	I0717 19:19:26.653798  444180 main.go:141] libmachine: (pause-744869) DBG | domain pause-744869 has defined IP address 192.168.50.34 and MAC address 52:54:00:04:fd:47 in network mk-pause-744869
	I0717 19:19:26.653983  444180 provision.go:143] copyHostCerts
	I0717 19:19:26.654075  444180 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:19:26.654109  444180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:19:26.654208  444180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:19:26.654359  444180 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:19:26.654372  444180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:19:26.654410  444180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:19:26.654505  444180 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:19:26.654514  444180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:19:26.654542  444180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:19:26.654620  444180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.pause-744869 san=[127.0.0.1 192.168.50.34 localhost minikube pause-744869]
	I0717 19:19:26.843367  444180 provision.go:177] copyRemoteCerts
	I0717 19:19:26.843427  444180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:19:26.843453  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHHostname
	I0717 19:19:26.846627  444180 main.go:141] libmachine: (pause-744869) DBG | domain pause-744869 has defined MAC address 52:54:00:04:fd:47 in network mk-pause-744869
	I0717 19:19:26.847061  444180 main.go:141] libmachine: (pause-744869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:fd:47", ip: ""} in network mk-pause-744869: {Iface:virbr2 ExpiryTime:2024-07-17 20:18:00 +0000 UTC Type:0 Mac:52:54:00:04:fd:47 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-744869 Clientid:01:52:54:00:04:fd:47}
	I0717 19:19:26.847092  444180 main.go:141] libmachine: (pause-744869) DBG | domain pause-744869 has defined IP address 192.168.50.34 and MAC address 52:54:00:04:fd:47 in network mk-pause-744869
	I0717 19:19:26.847336  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHPort
	I0717 19:19:26.847530  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHKeyPath
	I0717 19:19:26.847680  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHUsername
	I0717 19:19:26.847832  444180 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/pause-744869/id_rsa Username:docker}
	I0717 19:19:26.932379  444180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0717 19:19:26.961230  444180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:19:26.991498  444180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:19:27.031833  444180 provision.go:87] duration metric: took 385.873513ms to configureAuth
	I0717 19:19:27.031865  444180 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:19:27.032171  444180 config.go:182] Loaded profile config "pause-744869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:19:27.032262  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHHostname
	I0717 19:19:27.035227  444180 main.go:141] libmachine: (pause-744869) DBG | domain pause-744869 has defined MAC address 52:54:00:04:fd:47 in network mk-pause-744869
	I0717 19:19:27.035627  444180 main.go:141] libmachine: (pause-744869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:fd:47", ip: ""} in network mk-pause-744869: {Iface:virbr2 ExpiryTime:2024-07-17 20:18:00 +0000 UTC Type:0 Mac:52:54:00:04:fd:47 Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:pause-744869 Clientid:01:52:54:00:04:fd:47}
	I0717 19:19:27.035665  444180 main.go:141] libmachine: (pause-744869) DBG | domain pause-744869 has defined IP address 192.168.50.34 and MAC address 52:54:00:04:fd:47 in network mk-pause-744869
	I0717 19:19:27.035861  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHPort
	I0717 19:19:27.036083  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHKeyPath
	I0717 19:19:27.036270  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHKeyPath
	I0717 19:19:27.036417  444180 main.go:141] libmachine: (pause-744869) Calling .GetSSHUsername
	I0717 19:19:27.036609  444180 main.go:141] libmachine: Using SSH client type: native
	I0717 19:19:27.036854  444180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0717 19:19:27.036877  444180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:19:28.328906  443206 pod_ready.go:102] pod "coredns-7db6d8ff4d-nrpfn" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:30.329518  443206 pod_ready.go:102] pod "coredns-7db6d8ff4d-nrpfn" in "kube-system" namespace has status "Ready":"False"
	I0717 19:19:30.239596  443595 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:19:30.239611  443595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:19:30.239631  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHHostname
	I0717 19:19:30.242359  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:19:30.242772  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:19:30.242789  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:19:30.242955  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHPort
	I0717 19:19:30.243131  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:19:30.243286  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHUsername
	I0717 19:19:30.243424  443595 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321/id_rsa Username:docker}
	I0717 19:19:30.256366  443595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38149
	I0717 19:19:30.256865  443595 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:19:30.257347  443595 main.go:141] libmachine: Using API Version  1
	I0717 19:19:30.257521  443595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:19:30.257864  443595 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:19:30.258066  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetState
	I0717 19:19:30.259694  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .DriverName
	I0717 19:19:30.259908  443595 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:19:30.259924  443595 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:19:30.259943  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHHostname
	I0717 19:19:30.262763  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:19:30.263168  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8f:52", ip: ""} in network mk-kubernetes-upgrade-442321: {Iface:virbr1 ExpiryTime:2024-07-17 20:12:28 +0000 UTC Type:0 Mac:52:54:00:0a:8f:52 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:kubernetes-upgrade-442321 Clientid:01:52:54:00:0a:8f:52}
	I0717 19:19:30.263195  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | domain kubernetes-upgrade-442321 has defined IP address 192.168.39.49 and MAC address 52:54:00:0a:8f:52 in network mk-kubernetes-upgrade-442321
	I0717 19:19:30.263467  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHPort
	I0717 19:19:30.263637  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHKeyPath
	I0717 19:19:30.263786  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .GetSSHUsername
	I0717 19:19:30.263941  443595 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/kubernetes-upgrade-442321/id_rsa Username:docker}
	I0717 19:19:30.437191  443595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:19:30.459631  443595 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:19:30.459724  443595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:19:30.474012  443595 api_server.go:72] duration metric: took 275.620441ms to wait for apiserver process to appear ...
	I0717 19:19:30.474046  443595 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:19:30.474068  443595 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I0717 19:19:30.479980  443595 api_server.go:279] https://192.168.39.49:8443/healthz returned 200:
	ok
	I0717 19:19:30.481011  443595 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:19:30.481033  443595 api_server.go:131] duration metric: took 6.980696ms to wait for apiserver health ...
	I0717 19:19:30.481042  443595 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:19:30.486569  443595 system_pods.go:59] 8 kube-system pods found
	I0717 19:19:30.486589  443595 system_pods.go:61] "coredns-5cfdc65f69-hd4pb" [93d456b6-0a2b-4a6c-adc2-26cb5d0bb450] Running
	I0717 19:19:30.486593  443595 system_pods.go:61] "coredns-5cfdc65f69-n5wvp" [f2140781-a181-4287-b87d-803e431f2da4] Running
	I0717 19:19:30.486599  443595 system_pods.go:61] "etcd-kubernetes-upgrade-442321" [0fc1fa0f-52b0-4e3e-95f5-6aeae5e96a4a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:19:30.486607  443595 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-442321" [e37cd4de-4824-4525-938a-bc49b324878d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:19:30.486615  443595 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-442321" [95aad43b-eef8-4318-8d4e-f7b6d953dd6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:19:30.486619  443595 system_pods.go:61] "kube-proxy-7tnh7" [395043fd-99de-4099-901d-e51d5477ee2c] Running
	I0717 19:19:30.486625  443595 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-442321" [124746bc-9bfa-473e-a495-6bc3b93c613b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:19:30.486628  443595 system_pods.go:61] "storage-provisioner" [0ae902c2-0d30-4aa7-8eee-40f0f3fd46e5] Running
	I0717 19:19:30.486634  443595 system_pods.go:74] duration metric: took 5.587375ms to wait for pod list to return data ...
	I0717 19:19:30.486644  443595 kubeadm.go:582] duration metric: took 288.259236ms to wait for: map[apiserver:true system_pods:true]
	I0717 19:19:30.486657  443595 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:19:30.492443  443595 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:19:30.492469  443595 node_conditions.go:123] node cpu capacity is 2
	I0717 19:19:30.492500  443595 node_conditions.go:105] duration metric: took 5.820916ms to run NodePressure ...
	I0717 19:19:30.492516  443595 start.go:241] waiting for startup goroutines ...
	I0717 19:19:30.523611  443595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:19:30.641439  443595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:19:30.716688  443595 main.go:141] libmachine: Making call to close driver server
	I0717 19:19:30.716722  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .Close
	I0717 19:19:30.717047  443595 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:19:30.717069  443595 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:19:30.717081  443595 main.go:141] libmachine: Making call to close driver server
	I0717 19:19:30.717082  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Closing plugin on server side
	I0717 19:19:30.717091  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .Close
	I0717 19:19:30.717377  443595 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:19:30.717397  443595 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:19:30.726805  443595 main.go:141] libmachine: Making call to close driver server
	I0717 19:19:30.726831  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .Close
	I0717 19:19:30.727141  443595 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:19:30.727151  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Closing plugin on server side
	I0717 19:19:30.727166  443595 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:19:31.372459  443595 main.go:141] libmachine: Making call to close driver server
	I0717 19:19:31.372502  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .Close
	I0717 19:19:31.372951  443595 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:19:31.372972  443595 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:19:31.372978  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) DBG | Closing plugin on server side
	I0717 19:19:31.372986  443595 main.go:141] libmachine: Making call to close driver server
	I0717 19:19:31.372995  443595 main.go:141] libmachine: (kubernetes-upgrade-442321) Calling .Close
	I0717 19:19:31.373284  443595 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:19:31.373299  443595 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:19:31.376243  443595 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0717 19:19:31.377611  443595 addons.go:510] duration metric: took 1.179161789s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0717 19:19:31.377651  443595 start.go:246] waiting for cluster config update ...
	I0717 19:19:31.377673  443595 start.go:255] writing updated cluster config ...
	I0717 19:19:31.377912  443595 ssh_runner.go:195] Run: rm -f paused
	I0717 19:19:31.430233  443595 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 19:19:31.432338  443595 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-442321" cluster and "default" namespace by default
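	Before enabling the addons, the run above polls https://192.168.39.49:8443/healthz until the apiserver answers 200 (19:19:30.474 through 19:19:30.481). A minimal, hypothetical sketch of that kind of readiness poll, assuming a self-signed apiserver certificate and a fixed deadline (not minikube's actual implementation):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitForHealthz polls url until it returns 200 OK or the deadline passes.
	    // InsecureSkipVerify is an assumption made here for a self-signed apiserver cert.
	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	            Timeout:   2 * time.Second,
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // apiserver reported healthy
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("timed out waiting for %s", url)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.39.49:8443/healthz", time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }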
	
	
	==> CRI-O <==
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.136734633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721243972136712683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7566e5c7-bbea-4370-b08f-b565246b5038 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.137361894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f76176a-f238-4d0d-a7db-61ac86ba5e6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.137438172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f76176a-f238-4d0d-a7db-61ac86ba5e6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.137769873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:259e0ec3c4ef65c750eb057c4e5330d04168c93cab5c9f34b9d8745aa06b5ce7,PodSandboxId:4b2dda8e9bb467feacdcc49a4d63919c78ccc9217c442aa864a3a4750950dbf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721243969033779133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hd4pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d456b6-0a2b-4a6c-adc2-26cb5d0bb450,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fae67388f84484b4f8ed1a527fb615f74495b9c2ae2b43f84b9ad8fd2153027,PodSandboxId:94afd979438a0756cafe1b4b25bd16fcbce44aaf6ac5a403992ed514dee4f1d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721243969016353829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-n5wvp,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: f2140781-a181-4287-b87d-803e431f2da4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4872bdd09a95071979924ce0f41ad514e296a79fc059049ab9e43139ca91df4,PodSandboxId:58afb7cfc44c74549da33f07614223e75757cb62e0d3a897c91a85105c56fc2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721243968979640650,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ae902c2-0d30-4aa7-8eee-40f0f3fd46e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2e8199e8866ca54f4788db61b5deb62c99c9225d31167a11345828505c453af,PodSandboxId:76f6038e389271c5fea5d638fe48548214ae779d8a28ca650ff7706bce64824f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNI
NG,CreatedAt:1721243964958091531,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce2ffd84eb57d6981253028192111fd6,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccfed2ad8fa82a822cbcb0b50a07be0ca378729364bc2cce2c83e81f3455289,PodSandboxId:d0d5e8be7c0c4cc029269140d5af5dc5e51fc9866d369ac58974c41daec34eca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_R
UNNING,CreatedAt:1721243964725111484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc5f7fc80f59f85be018e45739e5fc11,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f965954fcbd51b93358196a425e372e9096414d50cd11a7a9e855c0ea7d7e,PodSandboxId:68939bb76581eede0765976acf3a735c8ce817415d0ff3203507ce7b4da8109e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Stat
e:CONTAINER_RUNNING,CreatedAt:1721243964749093635,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602205e6e38bad26b7137f28c7026232,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3508bcfcb70208834a025227c19b17f5bea13730b14588d49952d190cb10ef7,PodSandboxId:338c2af0f3082b020ae4c7f4f08577aa7bbe0340c80bccb0f3353ec7d1632bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUN
NING,CreatedAt:1721243964718596382,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26e4301961ca589606450f959dce6755,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1c349608e0a00810cc7f96a03068f1376d6245c47fc407d0cb39e1966d2489e,PodSandboxId:76f6038e389271c5fea5d638fe48548214ae779d8a28ca650ff7706bce64824f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:172124394291
8919396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce2ffd84eb57d6981253028192111fd6,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ea3b149aaa699b37f2998bd4c4ea5a681fdc8d5a23c474ad3539c0390c3748,PodSandboxId:cbd64094a5c95e72d78f1ee8aa3be38a2219c44beb7f511674900f45509e2ece,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721243941902397023,L
abels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7tnh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395043fd-99de-4099-901d-e51d5477ee2c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e35feb25da8e2330eb8982985a1cdd613a9c31af6b1e434f971328bb388f4c,PodSandboxId:58afb7cfc44c74549da33f07614223e75757cb62e0d3a897c91a85105c56fc2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721243940904340897,Labels:map[string]string{i
o.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ae902c2-0d30-4aa7-8eee-40f0f3fd46e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45d102cda968ac2d608db7f187bf665c00d34b4badd53636021bc0b97e3aad5,PodSandboxId:4b2dda8e9bb467feacdcc49a4d63919c78ccc9217c442aa864a3a4750950dbf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721243928417445420,Labels:map[string]string{io.kubernetes.container.na
me: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hd4pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d456b6-0a2b-4a6c-adc2-26cb5d0bb450,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7f1defc521f1db3761c9b9291af09b12130304cee7c7f52b762189ae0b4405,PodSandboxId:94afd979438a0756cafe1b4b25bd16fcbce44aaf6ac5a403992ed514dee4f1d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721243928256156705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-n5wvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2140781-a181-4287-b87d-803e431f2da4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba41851495b1c7438f3b86ab492c6fb4fe2049b9060e3dcd5dbcb4b066136526,PodSandboxId:6ff3c526c5e7dbc4419410b8748a6bb5de3e92dfbdaf63c5e1f0cb23ec9992ef,M
etadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721243925077620224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7tnh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395043fd-99de-4099-901d-e51d5477ee2c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf1f6b50d0d9baa045a1df47d1ae0647f44a8457f66a294eb76e6374a6a2787,PodSandboxId:49dce5cf76ffc1cb1f254f0435174f1825861329226c55f4a456d14bc34f3234,Metadata:&ContainerMetadata{Name:et
cd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721243925017410010,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26e4301961ca589606450f959dce6755,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7453972e4c66b6be4f7dd45fd8048f6351b543a3bfa7abecf15b845abd36130d,PodSandboxId:b098ca947a63ff944692ce4105048b19108fe3766d81dd541de88e6b98eb9d65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Image
Spec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721243924986321504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602205e6e38bad26b7137f28c7026232,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4489f2d46b81c8b941d99a37fb1727868d2dd3559f49f04f15589d2714a8f8b2,PodSandboxId:701ec511e1da438fa838c617901f4e884994fb27294bd1e68a3f64f77cf50e35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Im
ageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721243924727221443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc5f7fc80f59f85be018e45739e5fc11,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f76176a-f238-4d0d-a7db-61ac86ba5e6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.194425166Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=65edcb4c-3e26-4f16-8fc4-e2c3b388e343 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.194552756Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=65edcb4c-3e26-4f16-8fc4-e2c3b388e343 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.196085932Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a78d5639-70e0-4389-abaa-b565deb48d75 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.196489140Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721243972196465742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a78d5639-70e0-4389-abaa-b565deb48d75 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.197502165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39767f4c-c6c8-460c-8699-00eab070542a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.197625715Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39767f4c-c6c8-460c-8699-00eab070542a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.198028456Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:259e0ec3c4ef65c750eb057c4e5330d04168c93cab5c9f34b9d8745aa06b5ce7,PodSandboxId:4b2dda8e9bb467feacdcc49a4d63919c78ccc9217c442aa864a3a4750950dbf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721243969033779133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hd4pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d456b6-0a2b-4a6c-adc2-26cb5d0bb450,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fae67388f84484b4f8ed1a527fb615f74495b9c2ae2b43f84b9ad8fd2153027,PodSandboxId:94afd979438a0756cafe1b4b25bd16fcbce44aaf6ac5a403992ed514dee4f1d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721243969016353829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-n5wvp,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: f2140781-a181-4287-b87d-803e431f2da4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4872bdd09a95071979924ce0f41ad514e296a79fc059049ab9e43139ca91df4,PodSandboxId:58afb7cfc44c74549da33f07614223e75757cb62e0d3a897c91a85105c56fc2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721243968979640650,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ae902c2-0d30-4aa7-8eee-40f0f3fd46e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2e8199e8866ca54f4788db61b5deb62c99c9225d31167a11345828505c453af,PodSandboxId:76f6038e389271c5fea5d638fe48548214ae779d8a28ca650ff7706bce64824f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNI
NG,CreatedAt:1721243964958091531,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce2ffd84eb57d6981253028192111fd6,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccfed2ad8fa82a822cbcb0b50a07be0ca378729364bc2cce2c83e81f3455289,PodSandboxId:d0d5e8be7c0c4cc029269140d5af5dc5e51fc9866d369ac58974c41daec34eca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_R
UNNING,CreatedAt:1721243964725111484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc5f7fc80f59f85be018e45739e5fc11,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f965954fcbd51b93358196a425e372e9096414d50cd11a7a9e855c0ea7d7e,PodSandboxId:68939bb76581eede0765976acf3a735c8ce817415d0ff3203507ce7b4da8109e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Stat
e:CONTAINER_RUNNING,CreatedAt:1721243964749093635,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602205e6e38bad26b7137f28c7026232,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3508bcfcb70208834a025227c19b17f5bea13730b14588d49952d190cb10ef7,PodSandboxId:338c2af0f3082b020ae4c7f4f08577aa7bbe0340c80bccb0f3353ec7d1632bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUN
NING,CreatedAt:1721243964718596382,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26e4301961ca589606450f959dce6755,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1c349608e0a00810cc7f96a03068f1376d6245c47fc407d0cb39e1966d2489e,PodSandboxId:76f6038e389271c5fea5d638fe48548214ae779d8a28ca650ff7706bce64824f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:172124394291
8919396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce2ffd84eb57d6981253028192111fd6,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ea3b149aaa699b37f2998bd4c4ea5a681fdc8d5a23c474ad3539c0390c3748,PodSandboxId:cbd64094a5c95e72d78f1ee8aa3be38a2219c44beb7f511674900f45509e2ece,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721243941902397023,L
abels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7tnh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395043fd-99de-4099-901d-e51d5477ee2c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e35feb25da8e2330eb8982985a1cdd613a9c31af6b1e434f971328bb388f4c,PodSandboxId:58afb7cfc44c74549da33f07614223e75757cb62e0d3a897c91a85105c56fc2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721243940904340897,Labels:map[string]string{i
o.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ae902c2-0d30-4aa7-8eee-40f0f3fd46e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45d102cda968ac2d608db7f187bf665c00d34b4badd53636021bc0b97e3aad5,PodSandboxId:4b2dda8e9bb467feacdcc49a4d63919c78ccc9217c442aa864a3a4750950dbf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721243928417445420,Labels:map[string]string{io.kubernetes.container.na
me: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hd4pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d456b6-0a2b-4a6c-adc2-26cb5d0bb450,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7f1defc521f1db3761c9b9291af09b12130304cee7c7f52b762189ae0b4405,PodSandboxId:94afd979438a0756cafe1b4b25bd16fcbce44aaf6ac5a403992ed514dee4f1d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721243928256156705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-n5wvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2140781-a181-4287-b87d-803e431f2da4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba41851495b1c7438f3b86ab492c6fb4fe2049b9060e3dcd5dbcb4b066136526,PodSandboxId:6ff3c526c5e7dbc4419410b8748a6bb5de3e92dfbdaf63c5e1f0cb23ec9992ef,M
etadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721243925077620224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7tnh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395043fd-99de-4099-901d-e51d5477ee2c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf1f6b50d0d9baa045a1df47d1ae0647f44a8457f66a294eb76e6374a6a2787,PodSandboxId:49dce5cf76ffc1cb1f254f0435174f1825861329226c55f4a456d14bc34f3234,Metadata:&ContainerMetadata{Name:et
cd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721243925017410010,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26e4301961ca589606450f959dce6755,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7453972e4c66b6be4f7dd45fd8048f6351b543a3bfa7abecf15b845abd36130d,PodSandboxId:b098ca947a63ff944692ce4105048b19108fe3766d81dd541de88e6b98eb9d65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Image
Spec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721243924986321504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602205e6e38bad26b7137f28c7026232,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4489f2d46b81c8b941d99a37fb1727868d2dd3559f49f04f15589d2714a8f8b2,PodSandboxId:701ec511e1da438fa838c617901f4e884994fb27294bd1e68a3f64f77cf50e35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Im
ageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721243924727221443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc5f7fc80f59f85be018e45739e5fc11,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39767f4c-c6c8-460c-8699-00eab070542a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.252462357Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c18d62d1-e8a9-4b15-b643-3f3f61b82627 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.252565916Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c18d62d1-e8a9-4b15-b643-3f3f61b82627 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.254701420Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=178f6709-72b3-42f9-8fcf-831a5c9c5e13 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.256428655Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721243972256391535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=178f6709-72b3-42f9-8fcf-831a5c9c5e13 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.257059609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2da6ad7-6fd8-4c15-b9cc-3c25d3008c8f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.257133113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2da6ad7-6fd8-4c15-b9cc-3c25d3008c8f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.257494480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:259e0ec3c4ef65c750eb057c4e5330d04168c93cab5c9f34b9d8745aa06b5ce7,PodSandboxId:4b2dda8e9bb467feacdcc49a4d63919c78ccc9217c442aa864a3a4750950dbf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721243969033779133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hd4pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d456b6-0a2b-4a6c-adc2-26cb5d0bb450,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fae67388f84484b4f8ed1a527fb615f74495b9c2ae2b43f84b9ad8fd2153027,PodSandboxId:94afd979438a0756cafe1b4b25bd16fcbce44aaf6ac5a403992ed514dee4f1d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721243969016353829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-n5wvp,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: f2140781-a181-4287-b87d-803e431f2da4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4872bdd09a95071979924ce0f41ad514e296a79fc059049ab9e43139ca91df4,PodSandboxId:58afb7cfc44c74549da33f07614223e75757cb62e0d3a897c91a85105c56fc2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721243968979640650,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ae902c2-0d30-4aa7-8eee-40f0f3fd46e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2e8199e8866ca54f4788db61b5deb62c99c9225d31167a11345828505c453af,PodSandboxId:76f6038e389271c5fea5d638fe48548214ae779d8a28ca650ff7706bce64824f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNI
NG,CreatedAt:1721243964958091531,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce2ffd84eb57d6981253028192111fd6,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccfed2ad8fa82a822cbcb0b50a07be0ca378729364bc2cce2c83e81f3455289,PodSandboxId:d0d5e8be7c0c4cc029269140d5af5dc5e51fc9866d369ac58974c41daec34eca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_R
UNNING,CreatedAt:1721243964725111484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc5f7fc80f59f85be018e45739e5fc11,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f965954fcbd51b93358196a425e372e9096414d50cd11a7a9e855c0ea7d7e,PodSandboxId:68939bb76581eede0765976acf3a735c8ce817415d0ff3203507ce7b4da8109e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Stat
e:CONTAINER_RUNNING,CreatedAt:1721243964749093635,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602205e6e38bad26b7137f28c7026232,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3508bcfcb70208834a025227c19b17f5bea13730b14588d49952d190cb10ef7,PodSandboxId:338c2af0f3082b020ae4c7f4f08577aa7bbe0340c80bccb0f3353ec7d1632bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUN
NING,CreatedAt:1721243964718596382,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26e4301961ca589606450f959dce6755,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1c349608e0a00810cc7f96a03068f1376d6245c47fc407d0cb39e1966d2489e,PodSandboxId:76f6038e389271c5fea5d638fe48548214ae779d8a28ca650ff7706bce64824f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:172124394291
8919396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce2ffd84eb57d6981253028192111fd6,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ea3b149aaa699b37f2998bd4c4ea5a681fdc8d5a23c474ad3539c0390c3748,PodSandboxId:cbd64094a5c95e72d78f1ee8aa3be38a2219c44beb7f511674900f45509e2ece,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721243941902397023,L
abels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7tnh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395043fd-99de-4099-901d-e51d5477ee2c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e35feb25da8e2330eb8982985a1cdd613a9c31af6b1e434f971328bb388f4c,PodSandboxId:58afb7cfc44c74549da33f07614223e75757cb62e0d3a897c91a85105c56fc2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721243940904340897,Labels:map[string]string{i
o.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ae902c2-0d30-4aa7-8eee-40f0f3fd46e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45d102cda968ac2d608db7f187bf665c00d34b4badd53636021bc0b97e3aad5,PodSandboxId:4b2dda8e9bb467feacdcc49a4d63919c78ccc9217c442aa864a3a4750950dbf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721243928417445420,Labels:map[string]string{io.kubernetes.container.na
me: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hd4pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d456b6-0a2b-4a6c-adc2-26cb5d0bb450,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7f1defc521f1db3761c9b9291af09b12130304cee7c7f52b762189ae0b4405,PodSandboxId:94afd979438a0756cafe1b4b25bd16fcbce44aaf6ac5a403992ed514dee4f1d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721243928256156705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-n5wvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2140781-a181-4287-b87d-803e431f2da4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba41851495b1c7438f3b86ab492c6fb4fe2049b9060e3dcd5dbcb4b066136526,PodSandboxId:6ff3c526c5e7dbc4419410b8748a6bb5de3e92dfbdaf63c5e1f0cb23ec9992ef,M
etadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721243925077620224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7tnh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395043fd-99de-4099-901d-e51d5477ee2c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf1f6b50d0d9baa045a1df47d1ae0647f44a8457f66a294eb76e6374a6a2787,PodSandboxId:49dce5cf76ffc1cb1f254f0435174f1825861329226c55f4a456d14bc34f3234,Metadata:&ContainerMetadata{Name:et
cd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721243925017410010,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26e4301961ca589606450f959dce6755,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7453972e4c66b6be4f7dd45fd8048f6351b543a3bfa7abecf15b845abd36130d,PodSandboxId:b098ca947a63ff944692ce4105048b19108fe3766d81dd541de88e6b98eb9d65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Image
Spec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721243924986321504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602205e6e38bad26b7137f28c7026232,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4489f2d46b81c8b941d99a37fb1727868d2dd3559f49f04f15589d2714a8f8b2,PodSandboxId:701ec511e1da438fa838c617901f4e884994fb27294bd1e68a3f64f77cf50e35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Im
ageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721243924727221443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc5f7fc80f59f85be018e45739e5fc11,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2da6ad7-6fd8-4c15-b9cc-3c25d3008c8f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.313311410Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50707eb6-0acb-4e7d-9abf-2d7117cd4c7e name=/runtime.v1.RuntimeService/Version
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.313457406Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50707eb6-0acb-4e7d-9abf-2d7117cd4c7e name=/runtime.v1.RuntimeService/Version
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.315085056Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c449d312-a6dd-49dd-94da-2be0e5813714 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.315465987Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721243972315443059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c449d312-a6dd-49dd-94da-2be0e5813714 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.316190514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebe03a4d-34b1-4a31-a0e1-c8e467afc283 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.316241253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebe03a4d-34b1-4a31-a0e1-c8e467afc283 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:19:32 kubernetes-upgrade-442321 crio[2942]: time="2024-07-17 19:19:32.316555235Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:259e0ec3c4ef65c750eb057c4e5330d04168c93cab5c9f34b9d8745aa06b5ce7,PodSandboxId:4b2dda8e9bb467feacdcc49a4d63919c78ccc9217c442aa864a3a4750950dbf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721243969033779133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hd4pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d456b6-0a2b-4a6c-adc2-26cb5d0bb450,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fae67388f84484b4f8ed1a527fb615f74495b9c2ae2b43f84b9ad8fd2153027,PodSandboxId:94afd979438a0756cafe1b4b25bd16fcbce44aaf6ac5a403992ed514dee4f1d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721243969016353829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-n5wvp,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: f2140781-a181-4287-b87d-803e431f2da4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4872bdd09a95071979924ce0f41ad514e296a79fc059049ab9e43139ca91df4,PodSandboxId:58afb7cfc44c74549da33f07614223e75757cb62e0d3a897c91a85105c56fc2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721243968979640650,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ae902c2-0d30-4aa7-8eee-40f0f3fd46e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2e8199e8866ca54f4788db61b5deb62c99c9225d31167a11345828505c453af,PodSandboxId:76f6038e389271c5fea5d638fe48548214ae779d8a28ca650ff7706bce64824f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNI
NG,CreatedAt:1721243964958091531,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce2ffd84eb57d6981253028192111fd6,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccfed2ad8fa82a822cbcb0b50a07be0ca378729364bc2cce2c83e81f3455289,PodSandboxId:d0d5e8be7c0c4cc029269140d5af5dc5e51fc9866d369ac58974c41daec34eca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_R
UNNING,CreatedAt:1721243964725111484,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc5f7fc80f59f85be018e45739e5fc11,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f965954fcbd51b93358196a425e372e9096414d50cd11a7a9e855c0ea7d7e,PodSandboxId:68939bb76581eede0765976acf3a735c8ce817415d0ff3203507ce7b4da8109e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Stat
e:CONTAINER_RUNNING,CreatedAt:1721243964749093635,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602205e6e38bad26b7137f28c7026232,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3508bcfcb70208834a025227c19b17f5bea13730b14588d49952d190cb10ef7,PodSandboxId:338c2af0f3082b020ae4c7f4f08577aa7bbe0340c80bccb0f3353ec7d1632bdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUN
NING,CreatedAt:1721243964718596382,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26e4301961ca589606450f959dce6755,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1c349608e0a00810cc7f96a03068f1376d6245c47fc407d0cb39e1966d2489e,PodSandboxId:76f6038e389271c5fea5d638fe48548214ae779d8a28ca650ff7706bce64824f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:172124394291
8919396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce2ffd84eb57d6981253028192111fd6,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ea3b149aaa699b37f2998bd4c4ea5a681fdc8d5a23c474ad3539c0390c3748,PodSandboxId:cbd64094a5c95e72d78f1ee8aa3be38a2219c44beb7f511674900f45509e2ece,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721243941902397023,L
abels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7tnh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395043fd-99de-4099-901d-e51d5477ee2c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e35feb25da8e2330eb8982985a1cdd613a9c31af6b1e434f971328bb388f4c,PodSandboxId:58afb7cfc44c74549da33f07614223e75757cb62e0d3a897c91a85105c56fc2f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721243940904340897,Labels:map[string]string{i
o.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ae902c2-0d30-4aa7-8eee-40f0f3fd46e5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d45d102cda968ac2d608db7f187bf665c00d34b4badd53636021bc0b97e3aad5,PodSandboxId:4b2dda8e9bb467feacdcc49a4d63919c78ccc9217c442aa864a3a4750950dbf9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721243928417445420,Labels:map[string]string{io.kubernetes.container.na
me: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hd4pb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d456b6-0a2b-4a6c-adc2-26cb5d0bb450,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e7f1defc521f1db3761c9b9291af09b12130304cee7c7f52b762189ae0b4405,PodSandboxId:94afd979438a0756cafe1b4b25bd16fcbce44aaf6ac5a403992ed514dee4f1d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,Ru
ntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721243928256156705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-n5wvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2140781-a181-4287-b87d-803e431f2da4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba41851495b1c7438f3b86ab492c6fb4fe2049b9060e3dcd5dbcb4b066136526,PodSandboxId:6ff3c526c5e7dbc4419410b8748a6bb5de3e92dfbdaf63c5e1f0cb23ec9992ef,M
etadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721243925077620224,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7tnh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 395043fd-99de-4099-901d-e51d5477ee2c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf1f6b50d0d9baa045a1df47d1ae0647f44a8457f66a294eb76e6374a6a2787,PodSandboxId:49dce5cf76ffc1cb1f254f0435174f1825861329226c55f4a456d14bc34f3234,Metadata:&ContainerMetadata{Name:et
cd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721243925017410010,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26e4301961ca589606450f959dce6755,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7453972e4c66b6be4f7dd45fd8048f6351b543a3bfa7abecf15b845abd36130d,PodSandboxId:b098ca947a63ff944692ce4105048b19108fe3766d81dd541de88e6b98eb9d65,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&Image
Spec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721243924986321504,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 602205e6e38bad26b7137f28c7026232,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4489f2d46b81c8b941d99a37fb1727868d2dd3559f49f04f15589d2714a8f8b2,PodSandboxId:701ec511e1da438fa838c617901f4e884994fb27294bd1e68a3f64f77cf50e35,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Im
ageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721243924727221443,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-442321,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc5f7fc80f59f85be018e45739e5fc11,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebe03a4d-34b1-4a31-a0e1-c8e467afc283 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	259e0ec3c4ef6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   4b2dda8e9bb46       coredns-5cfdc65f69-hd4pb
	1fae67388f844       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   94afd979438a0       coredns-5cfdc65f69-n5wvp
	d4872bdd09a95       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   58afb7cfc44c7       storage-provisioner
	e2e8199e8866c       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   7 seconds ago       Running             kube-apiserver            3                   76f6038e38927       kube-apiserver-kubernetes-upgrade-442321
	225f965954fcb       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   7 seconds ago       Running             kube-scheduler            2                   68939bb76581e       kube-scheduler-kubernetes-upgrade-442321
	4ccfed2ad8fa8       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   7 seconds ago       Running             kube-controller-manager   2                   d0d5e8be7c0c4       kube-controller-manager-kubernetes-upgrade-442321
	d3508bcfcb702       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   7 seconds ago       Running             etcd                      2                   338c2af0f3082       etcd-kubernetes-upgrade-442321
	d1c349608e0a0       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   29 seconds ago      Exited              kube-apiserver            2                   76f6038e38927       kube-apiserver-kubernetes-upgrade-442321
	71ea3b149aaa6       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   30 seconds ago      Running             kube-proxy                2                   cbd64094a5c95       kube-proxy-7tnh7
	38e35feb25da8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   31 seconds ago      Exited              storage-provisioner       2                   58afb7cfc44c7       storage-provisioner
	d45d102cda968       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   44 seconds ago      Exited              coredns                   1                   4b2dda8e9bb46       coredns-5cfdc65f69-hd4pb
	7e7f1defc521f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   44 seconds ago      Exited              coredns                   1                   94afd979438a0       coredns-5cfdc65f69-n5wvp
	ba41851495b1c       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   47 seconds ago      Exited              kube-proxy                1                   6ff3c526c5e7d       kube-proxy-7tnh7
	eaf1f6b50d0d9       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   47 seconds ago      Exited              etcd                      1                   49dce5cf76ffc       etcd-kubernetes-upgrade-442321
	7453972e4c66b       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   47 seconds ago      Exited              kube-scheduler            1                   b098ca947a63f       kube-scheduler-kubernetes-upgrade-442321
	4489f2d46b81c       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   47 seconds ago      Exited              kube-controller-manager   1                   701ec511e1da4       kube-controller-manager-kubernetes-upgrade-442321
	
	
	==> coredns [1fae67388f84484b4f8ed1a527fb615f74495b9c2ae2b43f84b9ad8fd2153027] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [259e0ec3c4ef65c750eb057c4e5330d04168c93cab5c9f34b9d8745aa06b5ce7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [7e7f1defc521f1db3761c9b9291af09b12130304cee7c7f52b762189ae0b4405] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d45d102cda968ac2d608db7f187bf665c00d34b4badd53636021bc0b97e3aad5] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-442321
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-442321
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 19:17:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-442321
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 19:19:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 19:19:27 +0000   Wed, 17 Jul 2024 19:17:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 19:19:27 +0000   Wed, 17 Jul 2024 19:17:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 19:19:27 +0000   Wed, 17 Jul 2024 19:17:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 19:19:27 +0000   Wed, 17 Jul 2024 19:17:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.49
	  Hostname:    kubernetes-upgrade-442321
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c53ed9fbecb453f86d17fff44aa0cac
	  System UUID:                0c53ed9f-becb-453f-86d1-7fff44aa0cac
	  Boot ID:                    c9fcb9fa-c5f2-42fe-8fc7-6ad2dc288144
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-hd4pb                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     88s
	  kube-system                 coredns-5cfdc65f69-n5wvp                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     88s
	  kube-system                 etcd-kubernetes-upgrade-442321                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         89s
	  kube-system                 kube-apiserver-kubernetes-upgrade-442321             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-442321    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-7tnh7                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-kubernetes-upgrade-442321             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                 From             Message
	  ----    ------                   ----                ----             -------
	  Normal  Starting                 87s                 kube-proxy       
	  Normal  Starting                 4s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  99s (x8 over 100s)  kubelet          Node kubernetes-upgrade-442321 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s (x8 over 100s)  kubelet          Node kubernetes-upgrade-442321 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s (x7 over 100s)  kubelet          Node kubernetes-upgrade-442321 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           88s                 node-controller  Node kubernetes-upgrade-442321 event: Registered Node kubernetes-upgrade-442321 in Controller
	  Normal  RegisteredNode           0s                  node-controller  Node kubernetes-upgrade-442321 event: Registered Node kubernetes-upgrade-442321 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.287905] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.055917] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058982] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.195884] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.126569] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.347422] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +4.374405] systemd-fstab-generator[737]: Ignoring "noauto" option for root device
	[  +0.062448] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.031019] systemd-fstab-generator[856]: Ignoring "noauto" option for root device
	[Jul17 19:18] systemd-fstab-generator[1248]: Ignoring "noauto" option for root device
	[  +0.083666] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.031278] kauditd_printk_skb: 107 callbacks suppressed
	[ +34.784100] systemd-fstab-generator[2210]: Ignoring "noauto" option for root device
	[  +0.174478] systemd-fstab-generator[2222]: Ignoring "noauto" option for root device
	[  +0.361931] systemd-fstab-generator[2321]: Ignoring "noauto" option for root device
	[  +0.453852] systemd-fstab-generator[2524]: Ignoring "noauto" option for root device
	[  +0.988231] systemd-fstab-generator[2889]: Ignoring "noauto" option for root device
	[  +1.552160] systemd-fstab-generator[3244]: Ignoring "noauto" option for root device
	[  +5.525290] kauditd_printk_skb: 300 callbacks suppressed
	[Jul17 19:19] systemd-fstab-generator[4175]: Ignoring "noauto" option for root device
	[  +0.088906] kauditd_printk_skb: 7 callbacks suppressed
	[ +18.434553] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.145929] kauditd_printk_skb: 13 callbacks suppressed
	[  +1.185206] systemd-fstab-generator[4656]: Ignoring "noauto" option for root device
	
	
	==> etcd [d3508bcfcb70208834a025227c19b17f5bea13730b14588d49952d190cb10ef7] <==
	{"level":"info","ts":"2024-07-17T19:19:25.056149Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T19:19:25.056647Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7f2a407b6bb4eb12","initial-advertise-peer-urls":["https://192.168.39.49:2380"],"listen-peer-urls":["https://192.168.39.49:2380"],"advertise-client-urls":["https://192.168.39.49:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.49:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T19:19:25.056674Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T19:19:25.056727Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.39.49:2380"}
	{"level":"info","ts":"2024-07-17T19:19:25.056733Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.39.49:2380"}
	{"level":"info","ts":"2024-07-17T19:19:25.063608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7f2a407b6bb4eb12 switched to configuration voters=(9163207290670869266)"}
	{"level":"info","ts":"2024-07-17T19:19:25.063761Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"28c39da372138ae1","local-member-id":"7f2a407b6bb4eb12","added-peer-id":"7f2a407b6bb4eb12","added-peer-peer-urls":["https://192.168.39.49:2380"]}
	{"level":"info","ts":"2024-07-17T19:19:25.06391Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"28c39da372138ae1","local-member-id":"7f2a407b6bb4eb12","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:19:25.063938Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:19:26.286956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7f2a407b6bb4eb12 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T19:19:26.287019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7f2a407b6bb4eb12 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T19:19:26.287056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7f2a407b6bb4eb12 received MsgPreVoteResp from 7f2a407b6bb4eb12 at term 2"}
	{"level":"info","ts":"2024-07-17T19:19:26.287069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7f2a407b6bb4eb12 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T19:19:26.287075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7f2a407b6bb4eb12 received MsgVoteResp from 7f2a407b6bb4eb12 at term 3"}
	{"level":"info","ts":"2024-07-17T19:19:26.2871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7f2a407b6bb4eb12 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T19:19:26.287124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7f2a407b6bb4eb12 elected leader 7f2a407b6bb4eb12 at term 3"}
	{"level":"info","ts":"2024-07-17T19:19:26.295208Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7f2a407b6bb4eb12","local-member-attributes":"{Name:kubernetes-upgrade-442321 ClientURLs:[https://192.168.39.49:2379]}","request-path":"/0/members/7f2a407b6bb4eb12/attributes","cluster-id":"28c39da372138ae1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T19:19:26.2954Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T19:19:26.295845Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T19:19:26.299134Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T19:19:26.299257Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T19:19:26.299332Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T19:19:26.300476Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T19:19:26.30123Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T19:19:26.303794Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.49:2379"}
	
	
	==> etcd [eaf1f6b50d0d9baa045a1df47d1ae0647f44a8457f66a294eb76e6374a6a2787] <==
	
	
	==> kernel <==
	 19:19:32 up 2 min,  0 users,  load average: 1.73, 0.71, 0.26
	Linux kubernetes-upgrade-442321 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d1c349608e0a00810cc7f96a03068f1376d6245c47fc407d0cb39e1966d2489e] <==
	I0717 19:19:03.105444       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:19:03.402272       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0717 19:19:03.404926       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 19:19:03.405018       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0717 19:19:03.419161       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 19:19:03.423982       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0717 19:19:03.424081       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0717 19:19:03.424469       1 instance.go:231] Using reconciler: lease
	W0717 19:19:03.425814       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 19:19:04.406021       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 19:19:04.406119       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 19:19:04.427244       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 19:19:05.795751       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 19:19:05.823640       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 19:19:06.299606       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 19:19:08.569524       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 19:19:08.797383       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 19:19:08.883531       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 19:19:12.708338       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 19:19:12.934939       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 19:19:13.138766       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 19:19:18.366859       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 19:19:19.899176       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 19:19:20.103531       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0717 19:19:23.426144       1 instance.go:224] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [e2e8199e8866ca54f4788db61b5deb62c99c9225d31167a11345828505c453af] <==
	I0717 19:19:27.868258       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 19:19:27.869553       1 aggregator.go:171] initial CRD sync complete...
	I0717 19:19:27.869632       1 autoregister_controller.go:144] Starting autoregister controller
	I0717 19:19:27.869662       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 19:19:27.889860       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 19:19:27.895021       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 19:19:27.895929       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 19:19:27.900412       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 19:19:27.900413       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 19:19:27.901289       1 policy_source.go:224] refreshing policies
	I0717 19:19:27.900464       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0717 19:19:27.901521       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0717 19:19:27.903739       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 19:19:27.932239       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0717 19:19:27.932579       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0717 19:19:27.946388       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0717 19:19:27.982565       1 cache.go:39] Caches are synced for autoregister controller
	I0717 19:19:28.700408       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 19:19:29.966313       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 19:19:29.991939       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 19:19:30.064340       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 19:19:30.127292       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 19:19:30.138357       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 19:19:32.075478       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 19:19:32.285578       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4489f2d46b81c8b941d99a37fb1727868d2dd3559f49f04f15589d2714a8f8b2] <==
	
	
	==> kube-controller-manager [4ccfed2ad8fa82a822cbcb0b50a07be0ca378729364bc2cce2c83e81f3455289] <==
	I0717 19:19:32.250107       1 shared_informer.go:320] Caches are synced for namespace
	I0717 19:19:32.262353       1 shared_informer.go:320] Caches are synced for TTL
	I0717 19:19:32.262586       1 shared_informer.go:320] Caches are synced for service account
	I0717 19:19:32.267053       1 shared_informer.go:320] Caches are synced for GC
	I0717 19:19:32.269376       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0717 19:19:32.269483       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-442321"
	I0717 19:19:32.289409       1 shared_informer.go:320] Caches are synced for node
	I0717 19:19:32.289462       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0717 19:19:32.289479       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0717 19:19:32.289484       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0717 19:19:32.289488       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0717 19:19:32.289574       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-442321"
	I0717 19:19:32.290027       1 shared_informer.go:320] Caches are synced for persistent volume
	I0717 19:19:32.296985       1 shared_informer.go:320] Caches are synced for taint
	I0717 19:19:32.297215       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0717 19:19:32.297350       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-442321"
	I0717 19:19:32.297396       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0717 19:19:32.309175       1 shared_informer.go:320] Caches are synced for daemon sets
	I0717 19:19:32.324798       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 19:19:32.324934       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0717 19:19:32.353565       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 19:19:32.353614       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 19:19:32.365541       1 shared_informer.go:320] Caches are synced for HPA
	I0717 19:19:32.383388       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 19:19:32.392517       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [71ea3b149aaa699b37f2998bd4c4ea5a681fdc8d5a23c474ad3539c0390c3748] <==
	E0717 19:19:02.088451       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0717 19:19:02.090454       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-442321\": dial tcp 192.168.39.49:8443: connect: connection refused"
	E0717 19:19:13.161984       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-442321\": net/http: TLS handshake timeout"
	E0717 19:19:24.433437       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-442321\": dial tcp 192.168.39.49:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.49:53502->192.168.39.49:8443: read: connection reset by peer"
	I0717 19:19:28.636438       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.49"]
	E0717 19:19:28.636737       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0717 19:19:28.703211       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0717 19:19:28.703264       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 19:19:28.703295       1 server_linux.go:170] "Using iptables Proxier"
	I0717 19:19:28.705931       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0717 19:19:28.706261       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0717 19:19:28.706294       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:19:28.707837       1 config.go:197] "Starting service config controller"
	I0717 19:19:28.708040       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 19:19:28.708177       1 config.go:104] "Starting endpoint slice config controller"
	I0717 19:19:28.708206       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 19:19:28.708797       1 config.go:326] "Starting node config controller"
	I0717 19:19:28.708825       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 19:19:28.808763       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 19:19:28.808855       1 shared_informer.go:320] Caches are synced for node config
	I0717 19:19:28.808900       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [ba41851495b1c7438f3b86ab492c6fb4fe2049b9060e3dcd5dbcb4b066136526] <==
	
	
	==> kube-scheduler [225f965954fcbd51b93358196a425e372e9096414d50cd11a7a9e855c0ea7d7e] <==
	I0717 19:19:25.985584       1 serving.go:386] Generated self-signed cert in-memory
	W0717 19:19:27.765729       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 19:19:27.765854       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:19:27.765956       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 19:19:27.765981       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 19:19:27.832094       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0717 19:19:27.833958       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:19:27.838948       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 19:19:27.838998       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 19:19:27.839261       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 19:19:27.839348       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0717 19:19:27.940921       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [7453972e4c66b6be4f7dd45fd8048f6351b543a3bfa7abecf15b845abd36130d] <==
	
	
	==> kubelet <==
	Jul 17 19:19:24 kubernetes-upgrade-442321 kubelet[4182]: E0717 19:19:24.434679    4182 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-442321&limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.49:53552->192.168.39.49:8443: read: connection reset by peer" logger="UnhandledError"
	Jul 17 19:19:24 kubernetes-upgrade-442321 kubelet[4182]: W0717 19:19:24.434734    4182 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.49:53564->192.168.39.49:8443: read: connection reset by peer
	Jul 17 19:19:24 kubernetes-upgrade-442321 kubelet[4182]: E0717 19:19:24.434761    4182 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.49:53564->192.168.39.49:8443: read: connection reset by peer" logger="UnhandledError"
	Jul 17 19:19:24 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:24.637259    4182 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-442321"
	Jul 17 19:19:24 kubernetes-upgrade-442321 kubelet[4182]: E0717 19:19:24.638278    4182 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.49:8443: connect: connection refused" node="kubernetes-upgrade-442321"
	Jul 17 19:19:24 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:24.697744    4182 scope.go:117] "RemoveContainer" containerID="eaf1f6b50d0d9baa045a1df47d1ae0647f44a8457f66a294eb76e6374a6a2787"
	Jul 17 19:19:24 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:24.700497    4182 scope.go:117] "RemoveContainer" containerID="4489f2d46b81c8b941d99a37fb1727868d2dd3559f49f04f15589d2714a8f8b2"
	Jul 17 19:19:24 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:24.715183    4182 scope.go:117] "RemoveContainer" containerID="7453972e4c66b6be4f7dd45fd8048f6351b543a3bfa7abecf15b845abd36130d"
	Jul 17 19:19:24 kubernetes-upgrade-442321 kubelet[4182]: E0717 19:19:24.833340    4182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-442321?timeout=10s\": dial tcp 192.168.39.49:8443: connect: connection refused" interval="800ms"
	Jul 17 19:19:24 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:24.925682    4182 scope.go:117] "RemoveContainer" containerID="d1c349608e0a00810cc7f96a03068f1376d6245c47fc407d0cb39e1966d2489e"
	Jul 17 19:19:25 kubernetes-upgrade-442321 kubelet[4182]: E0717 19:19:25.778054    4182 eviction_manager.go:283] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"kubernetes-upgrade-442321\" not found"
	Jul 17 19:19:26 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:26.240584    4182 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-442321"
	Jul 17 19:19:27 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:27.957200    4182 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-442321"
	Jul 17 19:19:27 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:27.959101    4182 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-442321"
	Jul 17 19:19:27 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:27.959193    4182 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 19:19:27 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:27.961327    4182 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 19:19:28 kubernetes-upgrade-442321 kubelet[4182]: E0717 19:19:28.000431    4182 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-442321\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-442321"
	Jul 17 19:19:28 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:28.649214    4182 apiserver.go:52] "Watching apiserver"
	Jul 17 19:19:28 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:28.756496    4182 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 17 19:19:28 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:28.823328    4182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0ae902c2-0d30-4aa7-8eee-40f0f3fd46e5-tmp\") pod \"storage-provisioner\" (UID: \"0ae902c2-0d30-4aa7-8eee-40f0f3fd46e5\") " pod="kube-system/storage-provisioner"
	Jul 17 19:19:28 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:28.824141    4182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/395043fd-99de-4099-901d-e51d5477ee2c-lib-modules\") pod \"kube-proxy-7tnh7\" (UID: \"395043fd-99de-4099-901d-e51d5477ee2c\") " pod="kube-system/kube-proxy-7tnh7"
	Jul 17 19:19:28 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:28.824557    4182 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/395043fd-99de-4099-901d-e51d5477ee2c-xtables-lock\") pod \"kube-proxy-7tnh7\" (UID: \"395043fd-99de-4099-901d-e51d5477ee2c\") " pod="kube-system/kube-proxy-7tnh7"
	Jul 17 19:19:28 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:28.961027    4182 scope.go:117] "RemoveContainer" containerID="38e35feb25da8e2330eb8982985a1cdd613a9c31af6b1e434f971328bb388f4c"
	Jul 17 19:19:28 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:28.961584    4182 scope.go:117] "RemoveContainer" containerID="d45d102cda968ac2d608db7f187bf665c00d34b4badd53636021bc0b97e3aad5"
	Jul 17 19:19:28 kubernetes-upgrade-442321 kubelet[4182]: I0717 19:19:28.961738    4182 scope.go:117] "RemoveContainer" containerID="7e7f1defc521f1db3761c9b9291af09b12130304cee7c7f52b762189ae0b4405"
	
	
	==> storage-provisioner [38e35feb25da8e2330eb8982985a1cdd613a9c31af6b1e434f971328bb388f4c] <==
	I0717 19:19:00.976658       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 19:19:00.978614       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [d4872bdd09a95071979924ce0f41ad514e296a79fc059049ab9e43139ca91df4] <==
	I0717 19:19:29.166432       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 19:19:29.190411       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 19:19:29.190493       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-442321 -n kubernetes-upgrade-442321
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-442321 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-442321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-442321
--- FAIL: TestKubernetesUpgrade (440.70s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (280.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-998147 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-998147 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m40.409423802s)

                                                
                                                
-- stdout --
	* [old-k8s-version-998147] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19282
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-998147" primary control-plane node in "old-k8s-version-998147" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:22:30.131847  452176 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:22:30.131964  452176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:22:30.131970  452176 out.go:304] Setting ErrFile to fd 2...
	I0717 19:22:30.131976  452176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:22:30.132184  452176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:22:30.132901  452176 out.go:298] Setting JSON to false
	I0717 19:22:30.134146  452176 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11093,"bootTime":1721233057,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:22:30.134213  452176 start.go:139] virtualization: kvm guest
	I0717 19:22:30.136613  452176 out.go:177] * [old-k8s-version-998147] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:22:30.138097  452176 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 19:22:30.138131  452176 notify.go:220] Checking for updates...
	I0717 19:22:30.140664  452176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:22:30.141872  452176 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:22:30.142948  452176 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:22:30.144091  452176 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:22:30.145326  452176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:22:30.147164  452176 config.go:182] Loaded profile config "bridge-369638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:22:30.147269  452176 config.go:182] Loaded profile config "enable-default-cni-369638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:22:30.147378  452176 config.go:182] Loaded profile config "flannel-369638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:22:30.147492  452176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:22:30.185736  452176 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 19:22:30.186990  452176 start.go:297] selected driver: kvm2
	I0717 19:22:30.187017  452176 start.go:901] validating driver "kvm2" against <nil>
	I0717 19:22:30.187031  452176 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:22:30.187799  452176 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:22:30.187889  452176 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:22:30.203367  452176 install.go:137] /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 19:22:30.203417  452176 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 19:22:30.203644  452176 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:22:30.203722  452176 cni.go:84] Creating CNI manager for ""
	I0717 19:22:30.203736  452176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:22:30.203748  452176 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 19:22:30.203846  452176 start.go:340] cluster config:
	{Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:22:30.203979  452176 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:22:30.205800  452176 out.go:177] * Starting "old-k8s-version-998147" primary control-plane node in "old-k8s-version-998147" cluster
	I0717 19:22:30.207076  452176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:22:30.207123  452176 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 19:22:30.207148  452176 cache.go:56] Caching tarball of preloaded images
	I0717 19:22:30.207232  452176 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:22:30.207246  452176 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 19:22:30.207341  452176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json ...
	I0717 19:22:30.207366  452176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json: {Name:mka89d899845d95b92a8dda779d89552c952248b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:22:30.207513  452176 start.go:360] acquireMachinesLock for old-k8s-version-998147: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:22:37.166894  452176 start.go:364] duration metric: took 6.959352108s to acquireMachinesLock for "old-k8s-version-998147"
	I0717 19:22:37.166978  452176 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:22:37.167141  452176 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 19:22:37.169629  452176 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 19:22:37.169865  452176 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:22:37.169926  452176 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:22:37.190712  452176 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42665
	I0717 19:22:37.191365  452176 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:22:37.191942  452176 main.go:141] libmachine: Using API Version  1
	I0717 19:22:37.191966  452176 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:22:37.192346  452176 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:22:37.192574  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:22:37.192786  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:22:37.192991  452176 start.go:159] libmachine.API.Create for "old-k8s-version-998147" (driver="kvm2")
	I0717 19:22:37.193035  452176 client.go:168] LocalClient.Create starting
	I0717 19:22:37.193075  452176 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem
	I0717 19:22:37.193118  452176 main.go:141] libmachine: Decoding PEM data...
	I0717 19:22:37.193143  452176 main.go:141] libmachine: Parsing certificate...
	I0717 19:22:37.193217  452176 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem
	I0717 19:22:37.193246  452176 main.go:141] libmachine: Decoding PEM data...
	I0717 19:22:37.193262  452176 main.go:141] libmachine: Parsing certificate...
	I0717 19:22:37.193283  452176 main.go:141] libmachine: Running pre-create checks...
	I0717 19:22:37.193303  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .PreCreateCheck
	I0717 19:22:37.193711  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetConfigRaw
	I0717 19:22:37.194155  452176 main.go:141] libmachine: Creating machine...
	I0717 19:22:37.194170  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .Create
	I0717 19:22:37.194322  452176 main.go:141] libmachine: (old-k8s-version-998147) Creating KVM machine...
	I0717 19:22:37.195506  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found existing default KVM network
	I0717 19:22:37.197098  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:37.196952  452252 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:5f:78:7d} reservation:<nil>}
	I0717 19:22:37.198090  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:37.198024  452252 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:37:f0:6a} reservation:<nil>}
	I0717 19:22:37.198927  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:37.198846  452252 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:cf:f2:9e} reservation:<nil>}
	I0717 19:22:37.200027  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:37.199932  452252 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030f200}
	I0717 19:22:37.200056  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | created network xml: 
	I0717 19:22:37.200066  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | <network>
	I0717 19:22:37.200074  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG |   <name>mk-old-k8s-version-998147</name>
	I0717 19:22:37.200106  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG |   <dns enable='no'/>
	I0717 19:22:37.200130  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG |   
	I0717 19:22:37.200156  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0717 19:22:37.200168  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG |     <dhcp>
	I0717 19:22:37.200184  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0717 19:22:37.200194  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG |     </dhcp>
	I0717 19:22:37.200204  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG |   </ip>
	I0717 19:22:37.200215  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG |   
	I0717 19:22:37.200225  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | </network>
	I0717 19:22:37.200236  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | 
	I0717 19:22:37.205798  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | trying to create private KVM network mk-old-k8s-version-998147 192.168.72.0/24...
	I0717 19:22:37.282941  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | private KVM network mk-old-k8s-version-998147 192.168.72.0/24 created
	I0717 19:22:37.282976  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:37.282917  452252 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:22:37.282990  452176 main.go:141] libmachine: (old-k8s-version-998147) Setting up store path in /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147 ...
	I0717 19:22:37.283007  452176 main.go:141] libmachine: (old-k8s-version-998147) Building disk image from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 19:22:37.283068  452176 main.go:141] libmachine: (old-k8s-version-998147) Downloading /home/jenkins/minikube-integration/19282-392903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 19:22:37.552441  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:37.552313  452252 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa...
	I0717 19:22:37.705311  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:37.705183  452252 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/old-k8s-version-998147.rawdisk...
	I0717 19:22:37.705345  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | Writing magic tar header
	I0717 19:22:37.705362  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | Writing SSH key tar header
	I0717 19:22:37.705376  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:37.705322  452252 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147 ...
	I0717 19:22:37.705486  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147
	I0717 19:22:37.705561  452176 main.go:141] libmachine: (old-k8s-version-998147) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147 (perms=drwx------)
	I0717 19:22:37.705587  452176 main.go:141] libmachine: (old-k8s-version-998147) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines (perms=drwxr-xr-x)
	I0717 19:22:37.705598  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines
	I0717 19:22:37.705612  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:22:37.705625  452176 main.go:141] libmachine: (old-k8s-version-998147) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube (perms=drwxr-xr-x)
	I0717 19:22:37.705636  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903
	I0717 19:22:37.705652  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 19:22:37.705665  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | Checking permissions on dir: /home/jenkins
	I0717 19:22:37.705679  452176 main.go:141] libmachine: (old-k8s-version-998147) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903 (perms=drwxrwxr-x)
	I0717 19:22:37.705696  452176 main.go:141] libmachine: (old-k8s-version-998147) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 19:22:37.705718  452176 main.go:141] libmachine: (old-k8s-version-998147) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 19:22:37.705732  452176 main.go:141] libmachine: (old-k8s-version-998147) Creating domain...
	I0717 19:22:37.705742  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | Checking permissions on dir: /home
	I0717 19:22:37.705756  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | Skipping /home - not owner
	I0717 19:22:37.707136  452176 main.go:141] libmachine: (old-k8s-version-998147) define libvirt domain using xml: 
	I0717 19:22:37.707163  452176 main.go:141] libmachine: (old-k8s-version-998147) <domain type='kvm'>
	I0717 19:22:37.707171  452176 main.go:141] libmachine: (old-k8s-version-998147)   <name>old-k8s-version-998147</name>
	I0717 19:22:37.707176  452176 main.go:141] libmachine: (old-k8s-version-998147)   <memory unit='MiB'>2200</memory>
	I0717 19:22:37.707197  452176 main.go:141] libmachine: (old-k8s-version-998147)   <vcpu>2</vcpu>
	I0717 19:22:37.707209  452176 main.go:141] libmachine: (old-k8s-version-998147)   <features>
	I0717 19:22:37.707219  452176 main.go:141] libmachine: (old-k8s-version-998147)     <acpi/>
	I0717 19:22:37.707224  452176 main.go:141] libmachine: (old-k8s-version-998147)     <apic/>
	I0717 19:22:37.707229  452176 main.go:141] libmachine: (old-k8s-version-998147)     <pae/>
	I0717 19:22:37.707233  452176 main.go:141] libmachine: (old-k8s-version-998147)     
	I0717 19:22:37.707239  452176 main.go:141] libmachine: (old-k8s-version-998147)   </features>
	I0717 19:22:37.707244  452176 main.go:141] libmachine: (old-k8s-version-998147)   <cpu mode='host-passthrough'>
	I0717 19:22:37.707249  452176 main.go:141] libmachine: (old-k8s-version-998147)   
	I0717 19:22:37.707254  452176 main.go:141] libmachine: (old-k8s-version-998147)   </cpu>
	I0717 19:22:37.707264  452176 main.go:141] libmachine: (old-k8s-version-998147)   <os>
	I0717 19:22:37.707272  452176 main.go:141] libmachine: (old-k8s-version-998147)     <type>hvm</type>
	I0717 19:22:37.707277  452176 main.go:141] libmachine: (old-k8s-version-998147)     <boot dev='cdrom'/>
	I0717 19:22:37.707282  452176 main.go:141] libmachine: (old-k8s-version-998147)     <boot dev='hd'/>
	I0717 19:22:37.707289  452176 main.go:141] libmachine: (old-k8s-version-998147)     <bootmenu enable='no'/>
	I0717 19:22:37.707293  452176 main.go:141] libmachine: (old-k8s-version-998147)   </os>
	I0717 19:22:37.707330  452176 main.go:141] libmachine: (old-k8s-version-998147)   <devices>
	I0717 19:22:37.707357  452176 main.go:141] libmachine: (old-k8s-version-998147)     <disk type='file' device='cdrom'>
	I0717 19:22:37.707390  452176 main.go:141] libmachine: (old-k8s-version-998147)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/boot2docker.iso'/>
	I0717 19:22:37.707414  452176 main.go:141] libmachine: (old-k8s-version-998147)       <target dev='hdc' bus='scsi'/>
	I0717 19:22:37.707424  452176 main.go:141] libmachine: (old-k8s-version-998147)       <readonly/>
	I0717 19:22:37.707438  452176 main.go:141] libmachine: (old-k8s-version-998147)     </disk>
	I0717 19:22:37.707451  452176 main.go:141] libmachine: (old-k8s-version-998147)     <disk type='file' device='disk'>
	I0717 19:22:37.707463  452176 main.go:141] libmachine: (old-k8s-version-998147)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 19:22:37.707481  452176 main.go:141] libmachine: (old-k8s-version-998147)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/old-k8s-version-998147.rawdisk'/>
	I0717 19:22:37.707491  452176 main.go:141] libmachine: (old-k8s-version-998147)       <target dev='hda' bus='virtio'/>
	I0717 19:22:37.707499  452176 main.go:141] libmachine: (old-k8s-version-998147)     </disk>
	I0717 19:22:37.707510  452176 main.go:141] libmachine: (old-k8s-version-998147)     <interface type='network'>
	I0717 19:22:37.707525  452176 main.go:141] libmachine: (old-k8s-version-998147)       <source network='mk-old-k8s-version-998147'/>
	I0717 19:22:37.707540  452176 main.go:141] libmachine: (old-k8s-version-998147)       <model type='virtio'/>
	I0717 19:22:37.707548  452176 main.go:141] libmachine: (old-k8s-version-998147)     </interface>
	I0717 19:22:37.707553  452176 main.go:141] libmachine: (old-k8s-version-998147)     <interface type='network'>
	I0717 19:22:37.707562  452176 main.go:141] libmachine: (old-k8s-version-998147)       <source network='default'/>
	I0717 19:22:37.707573  452176 main.go:141] libmachine: (old-k8s-version-998147)       <model type='virtio'/>
	I0717 19:22:37.707583  452176 main.go:141] libmachine: (old-k8s-version-998147)     </interface>
	I0717 19:22:37.707590  452176 main.go:141] libmachine: (old-k8s-version-998147)     <serial type='pty'>
	I0717 19:22:37.707602  452176 main.go:141] libmachine: (old-k8s-version-998147)       <target port='0'/>
	I0717 19:22:37.707609  452176 main.go:141] libmachine: (old-k8s-version-998147)     </serial>
	I0717 19:22:37.707618  452176 main.go:141] libmachine: (old-k8s-version-998147)     <console type='pty'>
	I0717 19:22:37.707628  452176 main.go:141] libmachine: (old-k8s-version-998147)       <target type='serial' port='0'/>
	I0717 19:22:37.707637  452176 main.go:141] libmachine: (old-k8s-version-998147)     </console>
	I0717 19:22:37.707648  452176 main.go:141] libmachine: (old-k8s-version-998147)     <rng model='virtio'>
	I0717 19:22:37.707657  452176 main.go:141] libmachine: (old-k8s-version-998147)       <backend model='random'>/dev/random</backend>
	I0717 19:22:37.707668  452176 main.go:141] libmachine: (old-k8s-version-998147)     </rng>
	I0717 19:22:37.707678  452176 main.go:141] libmachine: (old-k8s-version-998147)     
	I0717 19:22:37.707685  452176 main.go:141] libmachine: (old-k8s-version-998147)     
	I0717 19:22:37.707718  452176 main.go:141] libmachine: (old-k8s-version-998147)   </devices>
	I0717 19:22:37.707737  452176 main.go:141] libmachine: (old-k8s-version-998147) </domain>
	I0717 19:22:37.707751  452176 main.go:141] libmachine: (old-k8s-version-998147) 
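Note (annotation, not part of the captured log): the XML above is the libvirt domain definition the kvm2 driver hands to libvirt for this profile. When a run like this needs debugging, the stored domain and the private network created a few lines earlier (mk-old-k8s-version-998147, 192.168.72.0/24) can be inspected on the Jenkins host with standard virsh tooling, for example:

	virsh dumpxml old-k8s-version-998147          # domain definition as libvirt stored it
	virsh net-dumpxml mk-old-k8s-version-998147   # generated private network

These commands are offered only as a debugging aid; they do not appear in the test run itself.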
	I0717 19:22:37.712114  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:73:51:2e in network default
	I0717 19:22:37.712780  452176 main.go:141] libmachine: (old-k8s-version-998147) Ensuring networks are active...
	I0717 19:22:37.712803  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:22:37.713548  452176 main.go:141] libmachine: (old-k8s-version-998147) Ensuring network default is active
	I0717 19:22:37.713921  452176 main.go:141] libmachine: (old-k8s-version-998147) Ensuring network mk-old-k8s-version-998147 is active
	I0717 19:22:37.714538  452176 main.go:141] libmachine: (old-k8s-version-998147) Getting domain xml...
	I0717 19:22:37.715294  452176 main.go:141] libmachine: (old-k8s-version-998147) Creating domain...
	I0717 19:22:39.207393  452176 main.go:141] libmachine: (old-k8s-version-998147) Waiting to get IP...
	I0717 19:22:39.208543  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:22:39.209127  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:22:39.209158  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:39.209080  452252 retry.go:31] will retry after 215.181531ms: waiting for machine to come up
	I0717 19:22:39.425686  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:22:39.426316  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:22:39.426348  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:39.426235  452252 retry.go:31] will retry after 312.174885ms: waiting for machine to come up
	I0717 19:22:39.739878  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:22:39.740542  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:22:39.740574  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:39.740464  452252 retry.go:31] will retry after 406.538949ms: waiting for machine to come up
	I0717 19:22:40.149292  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:22:40.150006  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:22:40.150037  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:40.149959  452252 retry.go:31] will retry after 542.191773ms: waiting for machine to come up
	I0717 19:22:40.693562  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:22:40.694229  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:22:40.694263  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:40.694165  452252 retry.go:31] will retry after 762.505438ms: waiting for machine to come up
	I0717 19:22:41.459415  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:22:41.461277  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:22:41.461317  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:41.461101  452252 retry.go:31] will retry after 936.160952ms: waiting for machine to come up
	I0717 19:22:42.399252  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:22:42.400197  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:22:42.400224  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:42.400098  452252 retry.go:31] will retry after 1.107864362s: waiting for machine to come up
	I0717 19:22:43.510001  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:22:43.510686  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:22:43.510710  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:43.510595  452252 retry.go:31] will retry after 1.323512525s: waiting for machine to come up
	I0717 19:22:44.835514  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:22:44.836138  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:22:44.836189  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:44.836084  452252 retry.go:31] will retry after 1.720820984s: waiting for machine to come up
	I0717 19:22:46.558783  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:22:46.559326  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:22:46.559358  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:46.559264  452252 retry.go:31] will retry after 1.610937211s: waiting for machine to come up
	I0717 19:22:48.171919  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:22:48.172512  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:22:48.172544  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:48.172402  452252 retry.go:31] will retry after 2.825549899s: waiting for machine to come up
	I0717 19:22:50.999678  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:22:51.000214  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:22:51.000263  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:51.000150  452252 retry.go:31] will retry after 2.855900579s: waiting for machine to come up
	I0717 19:22:53.858082  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:22:53.858660  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:22:53.858688  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:53.858605  452252 retry.go:31] will retry after 3.518834652s: waiting for machine to come up
	I0717 19:22:57.379115  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:22:57.379568  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:22:57.379616  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:22:57.379521  452252 retry.go:31] will retry after 4.898810274s: waiting for machine to come up
	I0717 19:23:02.280399  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:02.281081  452176 main.go:141] libmachine: (old-k8s-version-998147) Found IP for machine: 192.168.72.208
	I0717 19:23:02.281101  452176 main.go:141] libmachine: (old-k8s-version-998147) Reserving static IP address...
	I0717 19:23:02.281132  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has current primary IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:02.281477  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-998147", mac: "52:54:00:e7:d4:91", ip: "192.168.72.208"} in network mk-old-k8s-version-998147
	I0717 19:23:02.365873  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | Getting to WaitForSSH function...
	I0717 19:23:02.365910  452176 main.go:141] libmachine: (old-k8s-version-998147) Reserved static IP address: 192.168.72.208
	I0717 19:23:02.365924  452176 main.go:141] libmachine: (old-k8s-version-998147) Waiting for SSH to be available...
	I0717 19:23:02.369205  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:02.369592  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:02.369623  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:02.369745  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using SSH client type: external
	I0717 19:23:02.369777  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa (-rw-------)
	I0717 19:23:02.369820  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:23:02.369829  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | About to run SSH command:
	I0717 19:23:02.369842  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | exit 0
	I0717 19:23:02.492935  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | SSH cmd err, output: <nil>: 
	I0717 19:23:02.493232  452176 main.go:141] libmachine: (old-k8s-version-998147) KVM machine creation complete!
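Note (annotation): the machine is declared up once `exit 0` succeeds over SSH. Reconstructing from the option dump a few lines above, the equivalent manual check from the host would look roughly like:

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa \
	    docker@192.168.72.208 'exit 0'

and the DHCP lease that yielded 192.168.72.208 can be listed with `virsh net-dhcp-leases mk-old-k8s-version-998147`. Both are debugging aids, not commands issued by the run.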
	I0717 19:23:02.493544  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetConfigRaw
	I0717 19:23:02.494225  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:23:02.494488  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:23:02.494695  452176 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 19:23:02.494715  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetState
	I0717 19:23:02.496351  452176 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 19:23:02.496368  452176 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 19:23:02.496377  452176 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 19:23:02.496386  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:23:02.498999  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:02.499363  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:02.499410  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:02.499531  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:23:02.499740  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:23:02.499957  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:23:02.500176  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:23:02.500354  452176 main.go:141] libmachine: Using SSH client type: native
	I0717 19:23:02.500620  452176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:23:02.500637  452176 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 19:23:02.612174  452176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:23:02.612215  452176 main.go:141] libmachine: Detecting the provisioner...
	I0717 19:23:02.612228  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:23:02.615355  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:02.615707  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:02.615742  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:02.615820  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:23:02.616018  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:23:02.616215  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:23:02.616345  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:23:02.616542  452176 main.go:141] libmachine: Using SSH client type: native
	I0717 19:23:02.616774  452176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:23:02.616788  452176 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 19:23:02.726003  452176 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 19:23:02.726092  452176 main.go:141] libmachine: found compatible host: buildroot
	I0717 19:23:02.726103  452176 main.go:141] libmachine: Provisioning with buildroot...
	I0717 19:23:02.726115  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:23:02.726426  452176 buildroot.go:166] provisioning hostname "old-k8s-version-998147"
	I0717 19:23:02.726454  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:23:02.726645  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:23:02.729999  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:02.730359  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:02.730389  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:02.730595  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:23:02.730786  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:23:02.730961  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:23:02.731106  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:23:02.731262  452176 main.go:141] libmachine: Using SSH client type: native
	I0717 19:23:02.731414  452176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:23:02.731422  452176 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-998147 && echo "old-k8s-version-998147" | sudo tee /etc/hostname
	I0717 19:23:02.859875  452176 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-998147
	
	I0717 19:23:02.859915  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:23:02.863129  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:02.863555  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:02.863594  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:02.863796  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:23:02.864057  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:23:02.864259  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:23:02.864438  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:23:02.864682  452176 main.go:141] libmachine: Using SSH client type: native
	I0717 19:23:02.864887  452176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:23:02.864913  452176 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-998147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-998147/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-998147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:23:02.980208  452176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
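Note (annotation): the shell snippet above is the hostname fix-up applied right after `sudo hostname old-k8s-version-998147`: if /etc/hosts has no entry for the new hostname, the existing `127.0.1.1` line is rewritten (or one is appended), so the guest ends up with a line equivalent to:

	127.0.1.1 old-k8s-version-998147

which helps avoid "unable to resolve host" warnings from sudo and similar tools inside the VM.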
	I0717 19:23:02.980243  452176 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:23:02.980289  452176 buildroot.go:174] setting up certificates
	I0717 19:23:02.980304  452176 provision.go:84] configureAuth start
	I0717 19:23:02.980322  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:23:02.980664  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:23:02.983463  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:02.983875  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:02.983906  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:02.984034  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:23:02.987137  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:02.987499  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:02.987528  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:02.987709  452176 provision.go:143] copyHostCerts
	I0717 19:23:02.987780  452176 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:23:02.987793  452176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:23:02.987887  452176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:23:02.988006  452176 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:23:02.988017  452176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:23:02.988054  452176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:23:02.988126  452176 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:23:02.988137  452176 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:23:02.988168  452176 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:23:02.988319  452176 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-998147 san=[127.0.0.1 192.168.72.208 localhost minikube old-k8s-version-998147]
	I0717 19:23:03.159241  452176 provision.go:177] copyRemoteCerts
	I0717 19:23:03.159309  452176 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:23:03.159341  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:23:03.163074  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.163490  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:03.163525  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.163753  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:23:03.163927  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:23:03.164022  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:23:03.164188  452176 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:23:03.251519  452176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:23:03.281767  452176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 19:23:03.308820  452176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:23:03.337927  452176 provision.go:87] duration metric: took 357.576357ms to configureAuth
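Note (annotation): configureAuth refreshed the host-side copies of ca.pem/cert.pem/key.pem, generated a server certificate for this machine (SANs: 127.0.0.1, 192.168.72.208, localhost, minikube, old-k8s-version-998147), and pushed it to the guest; per the scp lines above the VM ends up with /etc/docker/ca.pem, /etc/docker/server.pem and /etc/docker/server-key.pem. A quick way to confirm them on a live profile (debugging aid, not part of the run) is something like:

	minikube -p old-k8s-version-998147 ssh -- ls -l /etc/docker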
	I0717 19:23:03.337958  452176 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:23:03.338167  452176 config.go:182] Loaded profile config "old-k8s-version-998147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:23:03.338281  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:23:03.341386  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.341760  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:03.341797  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.341939  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:23:03.342194  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:23:03.342385  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:23:03.342519  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:23:03.342680  452176 main.go:141] libmachine: Using SSH client type: native
	I0717 19:23:03.342871  452176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:23:03.342889  452176 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:23:03.639325  452176 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
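Note (annotation): the command above drops a one-line environment file on the guest, so after it runs /etc/sysconfig/crio.minikube contains exactly:

	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

i.e. the in-cluster service CIDR (10.96.0.0/12, matching the ServiceCIDR field in the cluster config later in this log) is passed as an insecure-registry range, presumably picked up by the crio service unit when the restart in the same command takes effect.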
	I0717 19:23:03.639360  452176 main.go:141] libmachine: Checking connection to Docker...
	I0717 19:23:03.639372  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetURL
	I0717 19:23:03.640972  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using libvirt version 6000000
	I0717 19:23:03.643621  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.644032  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:03.644059  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.644263  452176 main.go:141] libmachine: Docker is up and running!
	I0717 19:23:03.644276  452176 main.go:141] libmachine: Reticulating splines...
	I0717 19:23:03.644285  452176 client.go:171] duration metric: took 26.45123827s to LocalClient.Create
	I0717 19:23:03.644312  452176 start.go:167] duration metric: took 26.451322363s to libmachine.API.Create "old-k8s-version-998147"
	I0717 19:23:03.644341  452176 start.go:293] postStartSetup for "old-k8s-version-998147" (driver="kvm2")
	I0717 19:23:03.644357  452176 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:23:03.644381  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:23:03.644643  452176 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:23:03.644671  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:23:03.647443  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.647868  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:03.647900  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.648070  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:23:03.648239  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:23:03.648401  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:23:03.648575  452176 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:23:03.734941  452176 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:23:03.740647  452176 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:23:03.740670  452176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:23:03.740721  452176 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:23:03.740829  452176 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:23:03.740948  452176 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:23:03.758577  452176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:23:03.790798  452176 start.go:296] duration metric: took 146.44101ms for postStartSetup
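Note (annotation): postStartSetup creates the directory skeleton listed above and then syncs any local assets found under the test host's .minikube/addons and .minikube/files trees into the guest; in this run the only asset is files/etc/ssl/certs/4001712.pem, which is copied verbatim to /etc/ssl/certs/4001712.pem inside the VM.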
	I0717 19:23:03.790849  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetConfigRaw
	I0717 19:23:03.791514  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:23:03.794750  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.795302  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:03.795342  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.795623  452176 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json ...
	I0717 19:23:03.795834  452176 start.go:128] duration metric: took 26.628680905s to createHost
	I0717 19:23:03.795854  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:23:03.798743  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.799114  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:03.799145  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.799335  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:23:03.799471  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:23:03.799650  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:23:03.799761  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:23:03.799923  452176 main.go:141] libmachine: Using SSH client type: native
	I0717 19:23:03.800154  452176 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:23:03.800176  452176 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 19:23:03.910096  452176 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244183.893734266
	
	I0717 19:23:03.910121  452176 fix.go:216] guest clock: 1721244183.893734266
	I0717 19:23:03.910128  452176 fix.go:229] Guest: 2024-07-17 19:23:03.893734266 +0000 UTC Remote: 2024-07-17 19:23:03.795842689 +0000 UTC m=+33.702255388 (delta=97.891577ms)
	I0717 19:23:03.910150  452176 fix.go:200] guest clock delta is within tolerance: 97.891577ms
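Note (annotation): the clock-skew check is plain subtraction of the two timestamps printed above: 1721244183.893734266 (guest) - 1721244183.795842689 (host) = 0.097891577 s, i.e. the reported delta of 97.891577ms, comfortably inside the allowed tolerance, so no clock correction is attempted here.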
	I0717 19:23:03.910158  452176 start.go:83] releasing machines lock for "old-k8s-version-998147", held for 26.743223072s
	I0717 19:23:03.910184  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:23:03.910503  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:23:03.913662  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.914118  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:03.914153  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.914328  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:23:03.914952  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:23:03.915170  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:23:03.915274  452176 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:23:03.915322  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:23:03.915456  452176 ssh_runner.go:195] Run: cat /version.json
	I0717 19:23:03.915485  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:23:03.918082  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.918458  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:03.918491  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.918510  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.918655  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:23:03.918821  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:23:03.918991  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:23:03.919014  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:03.919045  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:03.919193  452176 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:23:03.919208  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:23:03.919403  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:23:03.919550  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:23:03.919696  452176 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:23:04.020616  452176 ssh_runner.go:195] Run: systemctl --version
	I0717 19:23:04.027708  452176 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:23:04.189505  452176 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:23:04.196099  452176 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:23:04.196166  452176 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:23:04.214020  452176 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:23:04.214046  452176 start.go:495] detecting cgroup driver to use...
	I0717 19:23:04.214113  452176 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:23:04.233222  452176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:23:04.250054  452176 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:23:04.250129  452176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:23:04.265298  452176 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:23:04.283580  452176 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:23:04.418804  452176 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:23:04.606253  452176 docker.go:233] disabling docker service ...
	I0717 19:23:04.606319  452176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:23:04.624291  452176 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:23:04.644351  452176 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:23:04.812274  452176 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:23:04.962920  452176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:23:04.977994  452176 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:23:04.998453  452176 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 19:23:04.998508  452176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:23:05.009655  452176 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:23:05.009742  452176 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:23:05.020364  452176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:23:05.032069  452176 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:23:05.043288  452176 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:23:05.055454  452176 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:23:05.065892  452176 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:23:05.065961  452176 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:23:05.080165  452176 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:23:05.091013  452176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:23:05.232976  452176 ssh_runner.go:195] Run: sudo systemctl restart crio
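Note (annotation): taken together, the edits above leave the runtime configured as follows (reconstructed from the tee/sed commands in the log, not dumped from the guest):

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys)
	pause_image = "registry.k8s.io/pause:3.2"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

In addition, br_netfilter was loaded manually because the sysctl probe failed, net.ipv4.ip_forward was set to 1, and crio was restarted via systemctl.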
	I0717 19:23:05.395652  452176 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:23:05.395733  452176 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:23:05.401188  452176 start.go:563] Will wait 60s for crictl version
	I0717 19:23:05.401245  452176 ssh_runner.go:195] Run: which crictl
	I0717 19:23:05.405198  452176 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:23:05.452388  452176 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:23:05.452515  452176 ssh_runner.go:195] Run: crio --version
	I0717 19:23:05.486588  452176 ssh_runner.go:195] Run: crio --version
	I0717 19:23:05.522263  452176 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 19:23:05.523535  452176 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:23:05.528192  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:05.528547  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:23:05.528608  452176 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:23:05.528811  452176 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 19:23:05.533041  452176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
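Note (annotation): the bash one-liner above rewrites /etc/hosts via a temp file so that it carries exactly one mapping for the libvirt gateway, i.e. a line equivalent to:

	192.168.72.1	host.minikube.internal

which is the well-known name minikube provides for reaching services on the host machine from inside the guest.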
	I0717 19:23:05.547727  452176 kubeadm.go:883] updating cluster {Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:23:05.547864  452176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:23:05.547934  452176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:23:05.593936  452176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:23:05.594003  452176 ssh_runner.go:195] Run: which lz4
	I0717 19:23:05.598261  452176 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 19:23:05.602886  452176 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:23:05.602919  452176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 19:23:07.516842  452176 crio.go:462] duration metric: took 1.918606291s to copy over tarball
	I0717 19:23:07.516909  452176 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:23:10.828986  452176 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.312039341s)
	I0717 19:23:10.829025  452176 crio.go:469] duration metric: took 3.312151422s to extract the tarball
	I0717 19:23:10.829036  452176 ssh_runner.go:146] rm: /preloaded.tar.lz4
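Condensed, the preload path above is three steps; this is a sketch assembled from the commands the log runs (the source tarball is the jenkins cache path shown above, and the copy happens over the runner's scp transport rather than a literal scp invocation):

	# copy the ~473 MB preload tarball into the guest, unpack it under /var, then drop it
	scp preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 <vm>:/preloaded.tar.lz4
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	rm /preloaded.tar.lz4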
	I0717 19:23:10.880965  452176 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:23:10.967624  452176 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:23:10.967651  452176 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:23:10.967712  452176 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:23:10.967985  452176 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 19:23:10.968006  452176 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:23:10.968124  452176 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:23:10.968208  452176 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:23:10.968223  452176 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 19:23:10.967987  452176 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:23:10.968360  452176 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:23:10.970710  452176 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:23:10.970731  452176 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:23:10.970796  452176 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 19:23:10.970828  452176 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:23:10.970969  452176 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:23:10.971106  452176 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:23:10.971453  452176 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 19:23:10.972287  452176 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:23:11.118394  452176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:23:11.118565  452176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:23:11.144818  452176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 19:23:11.154178  452176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:23:11.162455  452176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 19:23:11.200524  452176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:23:11.240721  452176 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 19:23:11.240779  452176 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:23:11.240823  452176 ssh_runner.go:195] Run: which crictl
	I0717 19:23:11.254765  452176 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 19:23:11.254814  452176 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:23:11.254864  452176 ssh_runner.go:195] Run: which crictl
	I0717 19:23:11.317691  452176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 19:23:11.353690  452176 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 19:23:11.353751  452176 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 19:23:11.353805  452176 ssh_runner.go:195] Run: which crictl
	I0717 19:23:11.353925  452176 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 19:23:11.353954  452176 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:23:11.353993  452176 ssh_runner.go:195] Run: which crictl
	I0717 19:23:11.360860  452176 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 19:23:11.360903  452176 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:23:11.360915  452176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:23:11.360931  452176 ssh_runner.go:195] Run: which crictl
	I0717 19:23:11.360934  452176 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 19:23:11.360965  452176 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 19:23:11.360994  452176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:23:11.360996  452176 ssh_runner.go:195] Run: which crictl
	I0717 19:23:11.453059  452176 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 19:23:11.453111  452176 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:23:11.453161  452176 ssh_runner.go:195] Run: which crictl
	I0717 19:23:11.453254  452176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 19:23:11.453318  452176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:23:11.456790  452176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 19:23:11.456886  452176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:23:11.457011  452176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 19:23:11.457063  452176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 19:23:11.580303  452176 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 19:23:11.580444  452176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 19:23:11.580667  452176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 19:23:11.581177  452176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 19:23:11.581239  452176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 19:23:11.616880  452176 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 19:23:11.862454  452176 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:23:12.010998  452176 cache_images.go:92] duration metric: took 1.043318926s to LoadCachedImages
	W0717 19:23:12.011099  452176 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0717 19:23:12.011116  452176 kubeadm.go:934] updating node { 192.168.72.208 8443 v1.20.0 crio true true} ...
	I0717 19:23:12.011242  452176 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-998147 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:23:12.011329  452176 ssh_runner.go:195] Run: crio config
	I0717 19:23:12.081477  452176 cni.go:84] Creating CNI manager for ""
	I0717 19:23:12.081505  452176 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:23:12.081524  452176 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:23:12.081558  452176 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-998147 NodeName:old-k8s-version-998147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 19:23:12.081811  452176 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-998147"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:23:12.081883  452176 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 19:23:12.096307  452176 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:23:12.096402  452176 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:23:12.110407  452176 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 19:23:12.133777  452176 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:23:12.164222  452176 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 19:23:12.187983  452176 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I0717 19:23:12.193571  452176 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:23:12.209617  452176 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:23:12.346441  452176 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:23:12.372802  452176 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147 for IP: 192.168.72.208
	I0717 19:23:12.372844  452176 certs.go:194] generating shared ca certs ...
	I0717 19:23:12.372868  452176 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:23:12.373075  452176 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:23:12.373143  452176 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:23:12.373159  452176 certs.go:256] generating profile certs ...
	I0717 19:23:12.373233  452176 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/client.key
	I0717 19:23:12.373256  452176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/client.crt with IP's: []
	I0717 19:23:12.592867  452176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/client.crt ...
	I0717 19:23:12.592961  452176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/client.crt: {Name:mkbe0245e3861d1e14d58e9fd743c35135897064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:23:12.593177  452176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/client.key ...
	I0717 19:23:12.593234  452176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/client.key: {Name:mkbdc5b8053b0739d98dea1ad2f1ed02a94695b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:23:12.593413  452176 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key.204e9011
	I0717 19:23:12.593478  452176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.crt.204e9011 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.208]
	I0717 19:23:12.941214  452176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.crt.204e9011 ...
	I0717 19:23:12.941248  452176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.crt.204e9011: {Name:mk0fb64699a442709392fff0028a7b1c79a7580d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:23:12.941427  452176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key.204e9011 ...
	I0717 19:23:12.941446  452176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key.204e9011: {Name:mk3c6f46b7dc1ddf92153ca473d8878b789b9fd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:23:12.941543  452176 certs.go:381] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.crt.204e9011 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.crt
	I0717 19:23:12.941623  452176 certs.go:385] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key.204e9011 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key
	I0717 19:23:12.941675  452176 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key
	I0717 19:23:12.941739  452176 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.crt with IP's: []
	I0717 19:23:13.157680  452176 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.crt ...
	I0717 19:23:13.157710  452176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.crt: {Name:mk59c30fd271251caf64e13b2de9f8ec92fe558b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:23:13.157862  452176 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key ...
	I0717 19:23:13.157875  452176 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key: {Name:mkc3c2c88a19cadb7816dd957d2794ad16ee10fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:23:13.158049  452176 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:23:13.158091  452176 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:23:13.158100  452176 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:23:13.158119  452176 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:23:13.158141  452176 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:23:13.158165  452176 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:23:13.158210  452176 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:23:13.158839  452176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:23:13.197332  452176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:23:13.243629  452176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:23:13.273706  452176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:23:13.309708  452176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 19:23:13.351233  452176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:23:13.395869  452176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:23:13.441449  452176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:23:13.468863  452176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:23:13.498473  452176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:23:13.531412  452176 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:23:13.561623  452176 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:23:13.584776  452176 ssh_runner.go:195] Run: openssl version
	I0717 19:23:13.591453  452176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:23:13.604336  452176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:23:13.609309  452176 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:23:13.609370  452176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:23:13.615684  452176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:23:13.628563  452176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:23:13.647169  452176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:23:13.653915  452176 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:23:13.653980  452176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:23:13.662049  452176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:23:13.675581  452176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:23:13.687580  452176 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:23:13.692704  452176 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:23:13.692769  452176 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:23:13.699389  452176 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:23:13.712679  452176 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:23:13.718890  452176 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 19:23:13.718962  452176 kubeadm.go:392] StartCluster: {Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:23:13.719072  452176 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:23:13.719135  452176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:23:13.764734  452176 cri.go:89] found id: ""
	I0717 19:23:13.764815  452176 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:23:13.782851  452176 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:23:13.797231  452176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:23:13.819920  452176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:23:13.819945  452176 kubeadm.go:157] found existing configuration files:
	
	I0717 19:23:13.819993  452176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:23:13.833824  452176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:23:13.833894  452176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:23:13.846603  452176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:23:13.858765  452176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:23:13.858835  452176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:23:13.871740  452176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:23:13.885738  452176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:23:13.885801  452176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:23:13.902981  452176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:23:13.926797  452176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:23:13.926891  452176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:23:13.943681  452176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:23:14.104872  452176 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:23:14.105846  452176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:23:14.341937  452176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:23:14.342077  452176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:23:14.342186  452176 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:23:14.582453  452176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:23:14.584915  452176 out.go:204]   - Generating certificates and keys ...
	I0717 19:23:14.585028  452176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:23:14.585113  452176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:23:14.703724  452176 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 19:23:14.855900  452176 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 19:23:14.943638  452176 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 19:23:15.131095  452176 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 19:23:15.575926  452176 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 19:23:15.583269  452176 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-998147] and IPs [192.168.72.208 127.0.0.1 ::1]
	I0717 19:23:16.062255  452176 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 19:23:16.062689  452176 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-998147] and IPs [192.168.72.208 127.0.0.1 ::1]
	I0717 19:23:16.133592  452176 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 19:23:16.367843  452176 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 19:23:16.448469  452176 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 19:23:16.448854  452176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:23:16.621887  452176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:23:17.006243  452176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:23:17.565402  452176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:23:17.738583  452176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:23:17.761231  452176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:23:17.762907  452176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:23:17.762984  452176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:23:17.936404  452176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:23:17.938418  452176 out.go:204]   - Booting up control plane ...
	I0717 19:23:17.938541  452176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:23:17.952476  452176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:23:17.954959  452176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:23:17.956338  452176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:23:17.960930  452176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:23:57.959202  452176 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:23:57.959779  452176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:23:57.959954  452176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:24:02.960288  452176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:24:02.960689  452176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:24:12.961021  452176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:24:12.961227  452176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:24:32.962004  452176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:24:32.962269  452176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:25:12.962518  452176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:25:12.963008  452176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:25:12.963040  452176 kubeadm.go:310] 
	I0717 19:25:12.963121  452176 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:25:12.963205  452176 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:25:12.963215  452176 kubeadm.go:310] 
	I0717 19:25:12.963288  452176 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:25:12.963400  452176 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:25:12.963669  452176 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:25:12.963692  452176 kubeadm.go:310] 
	I0717 19:25:12.963955  452176 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:25:12.964065  452176 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:25:12.964153  452176 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:25:12.964165  452176 kubeadm.go:310] 
	I0717 19:25:12.964415  452176 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:25:12.964663  452176 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:25:12.964692  452176 kubeadm.go:310] 
	I0717 19:25:12.964909  452176 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:25:12.965448  452176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:25:12.965939  452176 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:25:12.966060  452176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:25:12.966084  452176 kubeadm.go:310] 
	I0717 19:25:12.966222  452176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:25:12.966332  452176 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:25:12.966434  452176 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0717 19:25:12.966582  452176 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-998147] and IPs [192.168.72.208 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-998147] and IPs [192.168.72.208 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 19:25:12.966650  452176 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:25:13.453836  452176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:25:13.474931  452176 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:25:13.488822  452176 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:25:13.488847  452176 kubeadm.go:157] found existing configuration files:
	
	I0717 19:25:13.488912  452176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:25:13.502382  452176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:25:13.502446  452176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:25:13.515349  452176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:25:13.525516  452176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:25:13.525615  452176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:25:13.537550  452176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:25:13.547408  452176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:25:13.547483  452176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:25:13.559485  452176 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:25:13.571187  452176 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:25:13.571319  452176 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:25:13.583243  452176 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:25:13.829988  452176 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:27:09.879929  452176 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:27:09.880046  452176 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 19:27:09.881756  452176 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:27:09.881832  452176 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:27:09.881930  452176 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:27:09.882058  452176 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:27:09.882203  452176 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:27:09.882302  452176 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:27:09.884099  452176 out.go:204]   - Generating certificates and keys ...
	I0717 19:27:09.884207  452176 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:27:09.884274  452176 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:27:09.884362  452176 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:27:09.884448  452176 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:27:09.884565  452176 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:27:09.884677  452176 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:27:09.884782  452176 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:27:09.884878  452176 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:27:09.884994  452176 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:27:09.885123  452176 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:27:09.885175  452176 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:27:09.885253  452176 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:27:09.885319  452176 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:27:09.885391  452176 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:27:09.885475  452176 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:27:09.885554  452176 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:27:09.885676  452176 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:27:09.885775  452176 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:27:09.885812  452176 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:27:09.885902  452176 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:27:09.887410  452176 out.go:204]   - Booting up control plane ...
	I0717 19:27:09.887500  452176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:27:09.887581  452176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:27:09.887646  452176 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:27:09.887747  452176 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:27:09.887982  452176 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:27:09.888044  452176 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:27:09.888120  452176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:27:09.888286  452176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:27:09.888353  452176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:27:09.888580  452176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:27:09.888678  452176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:27:09.888871  452176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:27:09.888946  452176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:27:09.889114  452176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:27:09.889176  452176 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:27:09.889329  452176 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:27:09.889337  452176 kubeadm.go:310] 
	I0717 19:27:09.889390  452176 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:27:09.889450  452176 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:27:09.889457  452176 kubeadm.go:310] 
	I0717 19:27:09.889485  452176 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:27:09.889514  452176 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:27:09.889608  452176 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:27:09.889617  452176 kubeadm.go:310] 
	I0717 19:27:09.889711  452176 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:27:09.889746  452176 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:27:09.889774  452176 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:27:09.889780  452176 kubeadm.go:310] 
	I0717 19:27:09.889899  452176 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:27:09.890004  452176 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:27:09.890015  452176 kubeadm.go:310] 
	I0717 19:27:09.890191  452176 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:27:09.890307  452176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:27:09.890406  452176 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:27:09.890502  452176 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:27:09.890580  452176 kubeadm.go:310] 
	I0717 19:27:09.890593  452176 kubeadm.go:394] duration metric: took 3m56.171637144s to StartCluster
	I0717 19:27:09.890671  452176 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:27:09.890756  452176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:27:09.941699  452176 cri.go:89] found id: ""
	I0717 19:27:09.941732  452176 logs.go:276] 0 containers: []
	W0717 19:27:09.941741  452176 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:27:09.941747  452176 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:27:09.941818  452176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:27:09.977692  452176 cri.go:89] found id: ""
	I0717 19:27:09.977732  452176 logs.go:276] 0 containers: []
	W0717 19:27:09.977745  452176 logs.go:278] No container was found matching "etcd"
	I0717 19:27:09.977753  452176 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:27:09.977825  452176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:27:10.013209  452176 cri.go:89] found id: ""
	I0717 19:27:10.013242  452176 logs.go:276] 0 containers: []
	W0717 19:27:10.013250  452176 logs.go:278] No container was found matching "coredns"
	I0717 19:27:10.013257  452176 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:27:10.013310  452176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:27:10.048264  452176 cri.go:89] found id: ""
	I0717 19:27:10.048292  452176 logs.go:276] 0 containers: []
	W0717 19:27:10.048303  452176 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:27:10.048311  452176 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:27:10.048370  452176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:27:10.081450  452176 cri.go:89] found id: ""
	I0717 19:27:10.081498  452176 logs.go:276] 0 containers: []
	W0717 19:27:10.081511  452176 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:27:10.081518  452176 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:27:10.081598  452176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:27:10.113913  452176 cri.go:89] found id: ""
	I0717 19:27:10.113945  452176 logs.go:276] 0 containers: []
	W0717 19:27:10.113957  452176 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:27:10.113966  452176 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:27:10.114039  452176 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:27:10.150115  452176 cri.go:89] found id: ""
	I0717 19:27:10.150156  452176 logs.go:276] 0 containers: []
	W0717 19:27:10.150169  452176 logs.go:278] No container was found matching "kindnet"
	I0717 19:27:10.150184  452176 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:27:10.150202  452176 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:27:10.274170  452176 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:27:10.274198  452176 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:27:10.274211  452176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:27:10.377449  452176 logs.go:123] Gathering logs for container status ...
	I0717 19:27:10.377491  452176 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:27:10.419598  452176 logs.go:123] Gathering logs for kubelet ...
	I0717 19:27:10.419632  452176 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:27:10.472151  452176 logs.go:123] Gathering logs for dmesg ...
	I0717 19:27:10.472200  452176 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0717 19:27:10.486331  452176 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 19:27:10.486383  452176 out.go:239] * 
	W0717 19:27:10.486439  452176 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:27:10.486461  452176 out.go:239] * 
	W0717 19:27:10.487275  452176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:27:10.490819  452176 out.go:177] 
	W0717 19:27:10.492087  452176 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:27:10.492143  452176 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 19:27:10.492161  452176 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 19:27:10.493554  452176 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-998147 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-998147 -n old-k8s-version-998147
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-998147 -n old-k8s-version-998147: exit status 6 (221.390034ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:27:10.765025  458592 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-998147" does not appear in /home/jenkins/minikube-integration/19282-392903/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-998147" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (280.69s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-637675 --alsologtostderr -v=3
E0717 19:25:28.552610  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-637675 --alsologtostderr -v=3: exit status 82 (2m0.512368486s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-637675"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:25:24.959364  457976 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:25:24.959478  457976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:25:24.959485  457976 out.go:304] Setting ErrFile to fd 2...
	I0717 19:25:24.959489  457976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:25:24.959677  457976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:25:24.959933  457976 out.go:298] Setting JSON to false
	I0717 19:25:24.960024  457976 mustload.go:65] Loading cluster: embed-certs-637675
	I0717 19:25:24.960439  457976 config.go:182] Loaded profile config "embed-certs-637675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:25:24.960563  457976 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/config.json ...
	I0717 19:25:24.960806  457976 mustload.go:65] Loading cluster: embed-certs-637675
	I0717 19:25:24.960965  457976 config.go:182] Loaded profile config "embed-certs-637675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:25:24.961006  457976 stop.go:39] StopHost: embed-certs-637675
	I0717 19:25:24.961543  457976 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:25:24.961602  457976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:25:24.976984  457976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44287
	I0717 19:25:24.977616  457976 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:25:24.978263  457976 main.go:141] libmachine: Using API Version  1
	I0717 19:25:24.978294  457976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:25:24.978612  457976 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:25:24.981198  457976 out.go:177] * Stopping node "embed-certs-637675"  ...
	I0717 19:25:24.982552  457976 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 19:25:24.982612  457976 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:25:24.982943  457976 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 19:25:24.982979  457976 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:25:24.986231  457976 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:25:24.986757  457976 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:23:51 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:25:24.986800  457976 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:25:24.987076  457976 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:25:24.987281  457976 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:25:24.987485  457976 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:25:24.987673  457976 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:25:25.107474  457976 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 19:25:25.177787  457976 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 19:25:25.225740  457976 main.go:141] libmachine: Stopping "embed-certs-637675"...
	I0717 19:25:25.225802  457976 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:25:25.227459  457976 main.go:141] libmachine: (embed-certs-637675) Calling .Stop
	I0717 19:25:25.231310  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 0/120
	I0717 19:25:26.233253  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 1/120
	I0717 19:25:27.235147  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 2/120
	I0717 19:25:28.237361  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 3/120
	I0717 19:25:29.239424  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 4/120
	I0717 19:25:30.241400  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 5/120
	I0717 19:25:31.243112  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 6/120
	I0717 19:25:32.244684  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 7/120
	I0717 19:25:33.246065  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 8/120
	I0717 19:25:34.248554  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 9/120
	I0717 19:25:35.250788  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 10/120
	I0717 19:25:36.252180  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 11/120
	I0717 19:25:37.253535  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 12/120
	I0717 19:25:38.255042  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 13/120
	I0717 19:25:39.256449  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 14/120
	I0717 19:25:40.258479  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 15/120
	I0717 19:25:41.259776  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 16/120
	I0717 19:25:42.260852  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 17/120
	I0717 19:25:43.261949  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 18/120
	I0717 19:25:44.263426  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 19/120
	I0717 19:25:45.265393  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 20/120
	I0717 19:25:46.266897  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 21/120
	I0717 19:25:47.268431  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 22/120
	I0717 19:25:48.269741  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 23/120
	I0717 19:25:49.271289  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 24/120
	I0717 19:25:50.273365  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 25/120
	I0717 19:25:51.274996  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 26/120
	I0717 19:25:52.276473  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 27/120
	I0717 19:25:53.277761  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 28/120
	I0717 19:25:54.279244  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 29/120
	I0717 19:25:55.281400  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 30/120
	I0717 19:25:56.282725  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 31/120
	I0717 19:25:57.284091  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 32/120
	I0717 19:25:58.285355  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 33/120
	I0717 19:25:59.286635  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 34/120
	I0717 19:26:00.288456  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 35/120
	I0717 19:26:01.289928  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 36/120
	I0717 19:26:02.291317  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 37/120
	I0717 19:26:03.292576  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 38/120
	I0717 19:26:04.293800  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 39/120
	I0717 19:26:05.295830  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 40/120
	I0717 19:26:06.297212  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 41/120
	I0717 19:26:07.298934  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 42/120
	I0717 19:26:08.301209  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 43/120
	I0717 19:26:09.302782  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 44/120
	I0717 19:26:10.304708  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 45/120
	I0717 19:26:11.305948  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 46/120
	I0717 19:26:12.307221  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 47/120
	I0717 19:26:13.308409  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 48/120
	I0717 19:26:14.309823  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 49/120
	I0717 19:26:15.311884  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 50/120
	I0717 19:26:16.313501  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 51/120
	I0717 19:26:17.314687  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 52/120
	I0717 19:26:18.315984  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 53/120
	I0717 19:26:19.317253  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 54/120
	I0717 19:26:20.319211  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 55/120
	I0717 19:26:21.320266  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 56/120
	I0717 19:26:22.322001  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 57/120
	I0717 19:26:23.323162  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 58/120
	I0717 19:26:24.324698  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 59/120
	I0717 19:26:25.326926  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 60/120
	I0717 19:26:26.328037  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 61/120
	I0717 19:26:27.329446  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 62/120
	I0717 19:26:28.330715  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 63/120
	I0717 19:26:29.331924  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 64/120
	I0717 19:26:30.333622  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 65/120
	I0717 19:26:31.334773  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 66/120
	I0717 19:26:32.335962  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 67/120
	I0717 19:26:33.337275  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 68/120
	I0717 19:26:34.338772  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 69/120
	I0717 19:26:35.341082  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 70/120
	I0717 19:26:36.342181  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 71/120
	I0717 19:26:37.343295  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 72/120
	I0717 19:26:38.344572  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 73/120
	I0717 19:26:39.346273  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 74/120
	I0717 19:26:40.348429  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 75/120
	I0717 19:26:41.349811  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 76/120
	I0717 19:26:42.351287  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 77/120
	I0717 19:26:43.352541  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 78/120
	I0717 19:26:44.353813  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 79/120
	I0717 19:26:45.356108  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 80/120
	I0717 19:26:46.357513  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 81/120
	I0717 19:26:47.359023  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 82/120
	I0717 19:26:48.360403  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 83/120
	I0717 19:26:49.361789  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 84/120
	I0717 19:26:50.363897  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 85/120
	I0717 19:26:51.365284  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 86/120
	I0717 19:26:52.367831  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 87/120
	I0717 19:26:53.369289  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 88/120
	I0717 19:26:54.370962  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 89/120
	I0717 19:26:55.372521  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 90/120
	I0717 19:26:56.373980  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 91/120
	I0717 19:26:57.375394  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 92/120
	I0717 19:26:58.376937  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 93/120
	I0717 19:26:59.378263  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 94/120
	I0717 19:27:00.380536  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 95/120
	I0717 19:27:01.382159  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 96/120
	I0717 19:27:02.383423  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 97/120
	I0717 19:27:03.384999  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 98/120
	I0717 19:27:04.386576  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 99/120
	I0717 19:27:05.388729  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 100/120
	I0717 19:27:06.390255  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 101/120
	I0717 19:27:07.391740  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 102/120
	I0717 19:27:08.392947  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 103/120
	I0717 19:27:09.394576  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 104/120
	I0717 19:27:10.396573  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 105/120
	I0717 19:27:11.397974  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 106/120
	I0717 19:27:12.399505  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 107/120
	I0717 19:27:13.400686  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 108/120
	I0717 19:27:14.401844  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 109/120
	I0717 19:27:15.404321  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 110/120
	I0717 19:27:16.405547  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 111/120
	I0717 19:27:17.406777  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 112/120
	I0717 19:27:18.407924  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 113/120
	I0717 19:27:19.408916  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 114/120
	I0717 19:27:20.410775  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 115/120
	I0717 19:27:21.411893  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 116/120
	I0717 19:27:22.413093  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 117/120
	I0717 19:27:23.414555  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 118/120
	I0717 19:27:24.415561  457976 main.go:141] libmachine: (embed-certs-637675) Waiting for machine to stop 119/120
	I0717 19:27:25.416836  457976 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 19:27:25.416918  457976 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 19:27:25.419009  457976 out.go:177] 
	W0717 19:27:25.420266  457976 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 19:27:25.420281  457976 out.go:239] * 
	* 
	W0717 19:27:25.423693  457976 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:27:25.425011  457976 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-637675 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-637675 -n embed-certs-637675
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-637675 -n embed-certs-637675: exit status 3 (18.437797922s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0717 19:27:43.864853  458770 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	E0717 19:27:43.864880  458770 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-637675" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.95s)
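The repeated "Waiting for machine to stop N/120" lines and the exit status 82 above reflect a fixed polling budget: after the KVM driver is asked to stop the guest, the stop path checks the machine state once per second for 120 attempts, then gives up while the VM still reports "Running". The sketch below is an illustrative, self-contained reconstruction of that pattern, not minikube's actual stop.go code; waitForStop and the alwaysRunning callback are hypothetical stand-ins for the libmachine .Stop/.GetState calls visible in the log.

-- example (illustrative sketch, Go) --
package main

import (
	"fmt"
	"time"
)

// waitForStop polls getState once per second for at most `attempts`
// iterations and returns nil as soon as the machine reports "Stopped".
// If the budget runs out it returns an error carrying the last observed
// state, which is what surfaces above as GUEST_STOP_TIMEOUT / exit status 82.
func waitForStop(getState func() (string, error), attempts int) error {
	lastState := "Unknown"
	for i := 0; i < attempts; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		state, err := getState()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		lastState = state
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", lastState)
}

func main() {
	// Simulate a guest that never acknowledges the shutdown request, as in
	// the failure above: every poll still reports "Running", so the loop
	// exhausts its 120 attempts and the caller prints the timeout message.
	alwaysRunning := func() (string, error) { return "Running", nil }
	if err := waitForStop(alwaysRunning, 120); err != nil {
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM:", err)
	}
}
-- /example --

Run as-is, the sketch takes about two minutes, since it sleeps one second per attempt, mirroring the two-minute stop window recorded in this failure.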

x
+
TestStartStop/group/no-preload/serial/Stop (139.09s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-713715 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-713715 --alsologtostderr -v=3: exit status 82 (2m0.500232282s)

-- stdout --
	* Stopping node "no-preload-713715"  ...
	
	

-- /stdout --
** stderr ** 
	I0717 19:25:30.958853  458061 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:25:30.959104  458061 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:25:30.959115  458061 out.go:304] Setting ErrFile to fd 2...
	I0717 19:25:30.959120  458061 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:25:30.959371  458061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:25:30.959642  458061 out.go:298] Setting JSON to false
	I0717 19:25:30.959740  458061 mustload.go:65] Loading cluster: no-preload-713715
	I0717 19:25:30.960090  458061 config.go:182] Loaded profile config "no-preload-713715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:25:30.960176  458061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/config.json ...
	I0717 19:25:30.960368  458061 mustload.go:65] Loading cluster: no-preload-713715
	I0717 19:25:30.960514  458061 config.go:182] Loaded profile config "no-preload-713715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:25:30.960562  458061 stop.go:39] StopHost: no-preload-713715
	I0717 19:25:30.960993  458061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:25:30.961041  458061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:25:30.977014  458061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34471
	I0717 19:25:30.977564  458061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:25:30.978254  458061 main.go:141] libmachine: Using API Version  1
	I0717 19:25:30.978290  458061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:25:30.978657  458061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:25:30.981346  458061 out.go:177] * Stopping node "no-preload-713715"  ...
	I0717 19:25:30.982797  458061 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 19:25:30.982840  458061 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:25:30.983077  458061 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 19:25:30.983116  458061 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:25:30.986111  458061 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:25:30.986544  458061 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:23:26 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:25:30.986575  458061 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:25:30.986693  458061 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:25:30.986867  458061 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:25:30.987031  458061 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:25:30.987197  458061 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:25:31.085672  458061 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 19:25:31.147739  458061 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 19:25:31.208158  458061 main.go:141] libmachine: Stopping "no-preload-713715"...
	I0717 19:25:31.208190  458061 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:25:31.209596  458061 main.go:141] libmachine: (no-preload-713715) Calling .Stop
	I0717 19:25:31.213439  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 0/120
	I0717 19:25:32.215291  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 1/120
	I0717 19:25:33.216604  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 2/120
	I0717 19:25:34.217974  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 3/120
	I0717 19:25:35.219452  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 4/120
	I0717 19:25:36.221563  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 5/120
	I0717 19:25:37.223001  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 6/120
	I0717 19:25:38.224386  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 7/120
	I0717 19:25:39.225786  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 8/120
	I0717 19:25:40.227049  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 9/120
	I0717 19:25:41.229290  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 10/120
	I0717 19:25:42.230953  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 11/120
	I0717 19:25:43.232589  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 12/120
	I0717 19:25:44.233915  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 13/120
	I0717 19:25:45.235262  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 14/120
	I0717 19:25:46.237363  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 15/120
	I0717 19:25:47.239125  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 16/120
	I0717 19:25:48.240460  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 17/120
	I0717 19:25:49.242219  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 18/120
	I0717 19:25:50.243788  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 19/120
	I0717 19:25:51.245729  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 20/120
	I0717 19:25:52.246971  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 21/120
	I0717 19:25:53.248254  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 22/120
	I0717 19:25:54.249476  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 23/120
	I0717 19:25:55.250822  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 24/120
	I0717 19:25:56.252849  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 25/120
	I0717 19:25:57.254740  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 26/120
	I0717 19:25:58.256195  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 27/120
	I0717 19:25:59.257451  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 28/120
	I0717 19:26:00.258777  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 29/120
	I0717 19:26:01.261069  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 30/120
	I0717 19:26:02.262537  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 31/120
	I0717 19:26:03.263834  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 32/120
	I0717 19:26:04.265346  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 33/120
	I0717 19:26:05.267222  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 34/120
	I0717 19:26:06.269291  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 35/120
	I0717 19:26:07.270618  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 36/120
	I0717 19:26:08.272110  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 37/120
	I0717 19:26:09.273504  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 38/120
	I0717 19:26:10.274961  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 39/120
	I0717 19:26:11.277235  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 40/120
	I0717 19:26:12.278618  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 41/120
	I0717 19:26:13.279944  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 42/120
	I0717 19:26:14.281381  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 43/120
	I0717 19:26:15.282766  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 44/120
	I0717 19:26:16.285073  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 45/120
	I0717 19:26:17.286983  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 46/120
	I0717 19:26:18.288303  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 47/120
	I0717 19:26:19.289480  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 48/120
	I0717 19:26:20.290816  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 49/120
	I0717 19:26:21.293227  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 50/120
	I0717 19:26:22.295014  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 51/120
	I0717 19:26:23.296278  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 52/120
	I0717 19:26:24.297974  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 53/120
	I0717 19:26:25.299764  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 54/120
	I0717 19:26:26.302057  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 55/120
	I0717 19:26:27.303518  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 56/120
	I0717 19:26:28.304942  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 57/120
	I0717 19:26:29.307040  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 58/120
	I0717 19:26:30.308532  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 59/120
	I0717 19:26:31.310786  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 60/120
	I0717 19:26:32.312414  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 61/120
	I0717 19:26:33.313792  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 62/120
	I0717 19:26:34.315198  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 63/120
	I0717 19:26:35.316634  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 64/120
	I0717 19:26:36.318921  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 65/120
	I0717 19:26:37.320239  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 66/120
	I0717 19:26:38.321565  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 67/120
	I0717 19:26:39.322882  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 68/120
	I0717 19:26:40.324330  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 69/120
	I0717 19:26:41.326530  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 70/120
	I0717 19:26:42.328013  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 71/120
	I0717 19:26:43.329483  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 72/120
	I0717 19:26:44.331096  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 73/120
	I0717 19:26:45.332528  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 74/120
	I0717 19:26:46.334516  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 75/120
	I0717 19:26:47.335800  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 76/120
	I0717 19:26:48.337185  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 77/120
	I0717 19:26:49.339379  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 78/120
	I0717 19:26:50.340969  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 79/120
	I0717 19:26:51.343335  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 80/120
	I0717 19:26:52.345015  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 81/120
	I0717 19:26:53.346678  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 82/120
	I0717 19:26:54.348351  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 83/120
	I0717 19:26:55.349920  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 84/120
	I0717 19:26:56.352115  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 85/120
	I0717 19:26:57.353561  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 86/120
	I0717 19:26:58.355221  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 87/120
	I0717 19:26:59.356776  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 88/120
	I0717 19:27:00.358228  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 89/120
	I0717 19:27:01.359700  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 90/120
	I0717 19:27:02.361141  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 91/120
	I0717 19:27:03.362607  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 92/120
	I0717 19:27:04.364024  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 93/120
	I0717 19:27:05.365736  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 94/120
	I0717 19:27:06.368006  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 95/120
	I0717 19:27:07.369679  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 96/120
	I0717 19:27:08.371226  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 97/120
	I0717 19:27:09.372874  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 98/120
	I0717 19:27:10.374362  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 99/120
	I0717 19:27:11.376763  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 100/120
	I0717 19:27:12.378248  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 101/120
	I0717 19:27:13.379993  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 102/120
	I0717 19:27:14.381401  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 103/120
	I0717 19:27:15.383077  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 104/120
	I0717 19:27:16.385114  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 105/120
	I0717 19:27:17.386620  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 106/120
	I0717 19:27:18.387950  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 107/120
	I0717 19:27:19.389251  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 108/120
	I0717 19:27:20.390581  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 109/120
	I0717 19:27:21.392731  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 110/120
	I0717 19:27:22.394098  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 111/120
	I0717 19:27:23.395547  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 112/120
	I0717 19:27:24.396923  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 113/120
	I0717 19:27:25.398545  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 114/120
	I0717 19:27:26.400521  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 115/120
	I0717 19:27:27.401873  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 116/120
	I0717 19:27:28.403164  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 117/120
	I0717 19:27:29.404551  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 118/120
	I0717 19:27:30.405851  458061 main.go:141] libmachine: (no-preload-713715) Waiting for machine to stop 119/120
	I0717 19:27:31.407167  458061 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 19:27:31.407248  458061 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 19:27:31.409021  458061 out.go:177] 
	W0717 19:27:31.410301  458061 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 19:27:31.410322  458061 out.go:239] * 
	* 
	W0717 19:27:31.413424  458061 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:27:31.414618  458061 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-713715 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713715 -n no-preload-713715
E0717 19:27:31.434073  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:27:32.425263  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
E0717 19:27:32.431167  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
E0717 19:27:32.441408  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
E0717 19:27:32.461680  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
E0717 19:27:32.502020  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
E0717 19:27:32.582384  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
E0717 19:27:32.742912  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
E0717 19:27:33.063827  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
E0717 19:27:33.704200  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
E0717 19:27:34.985327  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
E0717 19:27:37.546493  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
E0717 19:27:37.608992  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:27:42.666786  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713715 -n no-preload-713715: exit status 3 (18.592137641s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0717 19:27:50.008824  458816 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.66:22: connect: no route to host
	E0717 19:27:50.008845  458816 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.66:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-713715" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.09s)
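The post-mortem status check exits with status 3 because, once the guest is wedged between running and stopped, the harness can no longer reach the node over SSH; the "dial tcp 192.168.61.66:22: connect: no route to host" errors above are that probe failing. A rough, hypothetical sketch of such a reachability check (illustrative only, not the helpers_test.go or minikube status implementation) is:

-- example (illustrative sketch, Go) --
package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable dials the node's SSH port with a short timeout. When the
// guest is unreachable the dial fails (for example "no route to host"),
// which is the error reported by the status check before the harness
// skips log retrieval.
func sshReachable(ip string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), timeout)
	if err != nil {
		return fmt.Errorf("status error: %w", err)
	}
	return conn.Close()
}

func main() {
	// IP taken from the log above; on a healthy node this returns nil.
	if err := sshReachable("192.168.61.66", 5*time.Second); err != nil {
		fmt.Println("host is not running, skipping log retrieval:", err)
	}
}
-- /example --

On this run the dial fails, so the harness records state="Error" and skips log retrieval, as shown above.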

x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-378944 --alsologtostderr -v=3
E0717 19:26:05.364627  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:26:07.925574  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:26:09.513709  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:26:13.046606  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:26:23.287773  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:26:35.251950  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:26:35.257189  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:26:35.267513  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:26:35.287817  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:26:35.328970  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:26:35.409735  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:26:35.570236  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:26:35.890849  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:26:36.531918  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:26:37.812153  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:26:40.373035  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:26:43.768773  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:26:45.493553  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:26:55.733840  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:26:56.647171  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:26:56.652464  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:26:56.662815  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:26:56.683179  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:26:56.723497  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:26:56.803856  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:26:56.964344  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:26:57.284992  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:26:57.925203  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:26:59.205916  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:27:01.766672  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:27:06.887033  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-378944 --alsologtostderr -v=3: exit status 82 (2m0.502158488s)

-- stdout --
	* Stopping node "default-k8s-diff-port-378944"  ...
	
	

-- /stdout --
** stderr ** 
	I0717 19:26:05.028241  458332 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:26:05.028359  458332 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:26:05.028369  458332 out.go:304] Setting ErrFile to fd 2...
	I0717 19:26:05.028375  458332 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:26:05.028561  458332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:26:05.028961  458332 out.go:298] Setting JSON to false
	I0717 19:26:05.029086  458332 mustload.go:65] Loading cluster: default-k8s-diff-port-378944
	I0717 19:26:05.029448  458332 config.go:182] Loaded profile config "default-k8s-diff-port-378944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:26:05.029545  458332 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/config.json ...
	I0717 19:26:05.029779  458332 mustload.go:65] Loading cluster: default-k8s-diff-port-378944
	I0717 19:26:05.029921  458332 config.go:182] Loaded profile config "default-k8s-diff-port-378944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:26:05.029962  458332 stop.go:39] StopHost: default-k8s-diff-port-378944
	I0717 19:26:05.030343  458332 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:26:05.030402  458332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:26:05.045132  458332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I0717 19:26:05.045643  458332 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:26:05.046289  458332 main.go:141] libmachine: Using API Version  1
	I0717 19:26:05.046309  458332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:26:05.046669  458332 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:26:05.049263  458332 out.go:177] * Stopping node "default-k8s-diff-port-378944"  ...
	I0717 19:26:05.050937  458332 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 19:26:05.050965  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:26:05.051213  458332 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 19:26:05.051246  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:26:05.053841  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:26:05.054205  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:24:33 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:26:05.054231  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:26:05.054334  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:26:05.054524  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:26:05.054715  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:26:05.054867  458332 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:26:05.172145  458332 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 19:26:05.237614  458332 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 19:26:05.296905  458332 main.go:141] libmachine: Stopping "default-k8s-diff-port-378944"...
	I0717 19:26:05.296946  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:26:05.298574  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Stop
	I0717 19:26:05.302224  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 0/120
	I0717 19:26:06.303267  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 1/120
	I0717 19:26:07.304451  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 2/120
	I0717 19:26:08.305541  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 3/120
	I0717 19:26:09.307169  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 4/120
	I0717 19:26:10.308989  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 5/120
	I0717 19:26:11.310220  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 6/120
	I0717 19:26:12.311299  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 7/120
	I0717 19:26:13.312681  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 8/120
	I0717 19:26:14.314616  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 9/120
	I0717 19:26:15.316005  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 10/120
	I0717 19:26:16.316974  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 11/120
	I0717 19:26:17.318450  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 12/120
	I0717 19:26:18.319557  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 13/120
	I0717 19:26:19.320697  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 14/120
	I0717 19:26:20.322420  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 15/120
	I0717 19:26:21.323386  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 16/120
	I0717 19:26:22.324510  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 17/120
	I0717 19:26:23.325713  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 18/120
	I0717 19:26:24.326756  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 19/120
	I0717 19:26:25.328612  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 20/120
	I0717 19:26:26.330598  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 21/120
	I0717 19:26:27.331625  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 22/120
	I0717 19:26:28.333227  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 23/120
	I0717 19:26:29.334781  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 24/120
	I0717 19:26:30.336178  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 25/120
	I0717 19:26:31.337434  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 26/120
	I0717 19:26:32.338497  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 27/120
	I0717 19:26:33.339637  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 28/120
	I0717 19:26:34.340698  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 29/120
	I0717 19:26:35.342408  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 30/120
	I0717 19:26:36.343438  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 31/120
	I0717 19:26:37.344279  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 32/120
	I0717 19:26:38.345556  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 33/120
	I0717 19:26:39.346934  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 34/120
	I0717 19:26:40.348780  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 35/120
	I0717 19:26:41.350011  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 36/120
	I0717 19:26:42.351392  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 37/120
	I0717 19:26:43.352915  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 38/120
	I0717 19:26:44.354187  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 39/120
	I0717 19:26:45.356186  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 40/120
	I0717 19:26:46.357668  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 41/120
	I0717 19:26:47.359602  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 42/120
	I0717 19:26:48.361094  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 43/120
	I0717 19:26:49.362310  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 44/120
	I0717 19:26:50.364248  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 45/120
	I0717 19:26:51.366503  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 46/120
	I0717 19:26:52.367925  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 47/120
	I0717 19:26:53.369289  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 48/120
	I0717 19:26:54.371164  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 49/120
	I0717 19:26:55.373261  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 50/120
	I0717 19:26:56.374560  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 51/120
	I0717 19:26:57.376088  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 52/120
	I0717 19:26:58.377585  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 53/120
	I0717 19:26:59.378893  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 54/120
	I0717 19:27:00.380985  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 55/120
	I0717 19:27:01.382990  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 56/120
	I0717 19:27:02.384352  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 57/120
	I0717 19:27:03.385496  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 58/120
	I0717 19:27:04.386779  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 59/120
	I0717 19:27:05.388776  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 60/120
	I0717 19:27:06.390139  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 61/120
	I0717 19:27:07.391547  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 62/120
	I0717 19:27:08.393097  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 63/120
	I0717 19:27:09.394724  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 64/120
	I0717 19:27:10.396246  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 65/120
	I0717 19:27:11.398062  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 66/120
	I0717 19:27:12.399615  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 67/120
	I0717 19:27:13.400920  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 68/120
	I0717 19:27:14.402601  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 69/120
	I0717 19:27:15.404719  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 70/120
	I0717 19:27:16.406667  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 71/120
	I0717 19:27:17.407944  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 72/120
	I0717 19:27:18.409476  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 73/120
	I0717 19:27:19.410533  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 74/120
	I0717 19:27:20.412192  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 75/120
	I0717 19:27:21.413157  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 76/120
	I0717 19:27:22.414710  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 77/120
	I0717 19:27:23.415694  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 78/120
	I0717 19:27:24.416629  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 79/120
	I0717 19:27:25.418476  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 80/120
	I0717 19:27:26.419723  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 81/120
	I0717 19:27:27.420874  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 82/120
	I0717 19:27:28.422087  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 83/120
	I0717 19:27:29.423363  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 84/120
	I0717 19:27:30.425178  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 85/120
	I0717 19:27:31.426806  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 86/120
	I0717 19:27:32.427950  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 87/120
	I0717 19:27:33.429358  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 88/120
	I0717 19:27:34.430504  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 89/120
	I0717 19:27:35.432563  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 90/120
	I0717 19:27:36.433912  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 91/120
	I0717 19:27:37.435120  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 92/120
	I0717 19:27:38.436449  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 93/120
	I0717 19:27:39.437717  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 94/120
	I0717 19:27:40.439739  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 95/120
	I0717 19:27:41.441231  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 96/120
	I0717 19:27:42.442354  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 97/120
	I0717 19:27:43.443807  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 98/120
	I0717 19:27:44.445239  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 99/120
	I0717 19:27:45.447551  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 100/120
	I0717 19:27:46.448971  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 101/120
	I0717 19:27:47.450120  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 102/120
	I0717 19:27:48.451525  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 103/120
	I0717 19:27:49.452855  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 104/120
	I0717 19:27:50.454882  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 105/120
	I0717 19:27:51.456400  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 106/120
	I0717 19:27:52.457680  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 107/120
	I0717 19:27:53.459045  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 108/120
	I0717 19:27:54.460694  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 109/120
	I0717 19:27:55.463231  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 110/120
	I0717 19:27:56.464368  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 111/120
	I0717 19:27:57.465859  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 112/120
	I0717 19:27:58.467240  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 113/120
	I0717 19:27:59.468535  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 114/120
	I0717 19:28:00.470574  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 115/120
	I0717 19:28:01.471874  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 116/120
	I0717 19:28:02.473105  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 117/120
	I0717 19:28:03.474536  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 118/120
	I0717 19:28:04.475950  458332 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for machine to stop 119/120
	I0717 19:28:05.477270  458332 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 19:28:05.477357  458332 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 19:28:05.479482  458332 out.go:177] 
	W0717 19:28:05.480976  458332 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 19:28:05.480993  458332 out.go:239] * 
	* 
	W0717 19:28:05.484132  458332 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:28:05.485507  458332 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-378944 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-378944 -n default-k8s-diff-port-378944
E0717 19:28:10.186595  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
E0717 19:28:13.388893  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
E0717 19:28:18.570042  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-378944 -n default-k8s-diff-port-378944: exit status 3 (18.569378005s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:28:24.056863  459197 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.238:22: connect: no route to host
	E0717 19:28:24.056891  459197 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.238:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-378944" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-998147 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-998147 create -f testdata/busybox.yaml: exit status 1 (45.508937ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-998147" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-998147 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-998147 -n old-k8s-version-998147
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-998147 -n old-k8s-version-998147: exit status 6 (217.066157ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:27:11.029082  458632 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-998147" does not appear in /home/jenkins/minikube-integration/19282-392903/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-998147" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-998147 -n old-k8s-version-998147
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-998147 -n old-k8s-version-998147: exit status 6 (215.605673ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:27:11.244567  458662 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-998147" does not appear in /home/jenkins/minikube-integration/19282-392903/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-998147" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-998147 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0717 19:27:13.091251  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 19:27:16.215064  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:27:17.127950  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:27:24.729159  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-998147 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m56.28974843s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-998147 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-998147 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-998147 describe deploy/metrics-server -n kube-system: exit status 1 (45.853309ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-998147" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-998147 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-998147 -n old-k8s-version-998147
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-998147 -n old-k8s-version-998147: exit status 6 (218.568183ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:29:07.798081  459608 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-998147" does not appear in /home/jenkins/minikube-integration/19282-392903/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-998147" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.55s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-637675 -n embed-certs-637675
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-637675 -n embed-certs-637675: exit status 3 (3.16768824s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:27:47.032875  458881 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	E0717 19:27:47.032900  458881 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-637675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0717 19:27:49.705404  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
E0717 19:27:49.710657  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
E0717 19:27:49.720903  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
E0717 19:27:49.741130  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
E0717 19:27:49.781384  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
E0717 19:27:49.861728  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-637675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155051043s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-637675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-637675 -n embed-certs-637675
E0717 19:27:54.825351  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-637675 -n embed-certs-637675: exit status 3 (3.06113006s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:27:56.248884  459006 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host
	E0717 19:27:56.248908  459006 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.140:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-637675" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713715 -n no-preload-713715
E0717 19:27:50.022438  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
E0717 19:27:50.343015  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
E0717 19:27:50.983656  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
E0717 19:27:52.264411  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
E0717 19:27:52.907797  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713715 -n no-preload-713715: exit status 3 (3.167800223s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:27:53.176967  458967 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.66:22: connect: no route to host
	E0717 19:27:53.176991  458967 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.66:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-713715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-713715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153096277s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.66:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-713715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713715 -n no-preload-713715
E0717 19:27:59.945605  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713715 -n no-preload-713715: exit status 3 (3.062745463s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:28:02.392907  459101 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.66:22: connect: no route to host
	E0717 19:28:02.392935  459101 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.66:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-713715" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-378944 -n default-k8s-diff-port-378944
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-378944 -n default-k8s-diff-port-378944: exit status 3 (3.167796281s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:28:27.224898  459279 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.238:22: connect: no route to host
	E0717 19:28:27.224921  459279 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.238:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-378944 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0717 19:28:30.666898  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-378944 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.1534867s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.238:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-378944 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-378944 -n default-k8s-diff-port-378944
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-378944 -n default-k8s-diff-port-378944: exit status 3 (3.062294777s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:28:36.440881  459400 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.238:22: connect: no route to host
	E0717 19:28:36.440904  459400 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.238:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-378944" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (756.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-998147 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0717 19:29:11.627891  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
E0717 19:29:13.298413  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:29:19.095778  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:29:33.779480  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:29:40.490339  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:29:47.588827  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:30:05.952080  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 19:30:14.739840  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:30:15.274724  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:30:16.270095  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
E0717 19:30:33.548367  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
E0717 19:31:02.804299  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:31:29.000062  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 19:31:30.489794  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:31:35.251291  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:31:36.660270  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:31:56.646786  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:32:02.936410  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:32:13.091085  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 19:32:24.330811  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:32:32.425657  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
E0717 19:32:49.704686  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
E0717 19:33:00.111237  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
E0717 19:33:17.389401  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
E0717 19:33:52.816876  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:34:20.500754  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:34:47.589103  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:35:05.951546  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 19:36:02.804550  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:36:35.251076  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
E0717 19:36:56.647883  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
E0717 19:37:13.091565  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 19:37:32.424913  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-998147 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m32.865806983s)

                                                
                                                
-- stdout --
	* [old-k8s-version-998147] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19282
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-998147" primary control-plane node in "old-k8s-version-998147" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-998147" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:29:11.500453  459741 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:29:11.500622  459741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:29:11.500633  459741 out.go:304] Setting ErrFile to fd 2...
	I0717 19:29:11.500639  459741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:29:11.500842  459741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:29:11.501399  459741 out.go:298] Setting JSON to false
	I0717 19:29:11.502411  459741 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11494,"bootTime":1721233057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:29:11.502474  459741 start.go:139] virtualization: kvm guest
	I0717 19:29:11.504961  459741 out.go:177] * [old-k8s-version-998147] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:29:11.506551  459741 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 19:29:11.506614  459741 notify.go:220] Checking for updates...
	I0717 19:29:11.509388  459741 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:29:11.511209  459741 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:29:11.512669  459741 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:29:11.514164  459741 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:29:11.515499  459741 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:29:11.517240  459741 config.go:182] Loaded profile config "old-k8s-version-998147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:29:11.517702  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:29:11.517772  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:29:11.533954  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42501
	I0717 19:29:11.534390  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:29:11.534975  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:29:11.535003  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:29:11.535362  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:29:11.535550  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:29:11.537723  459741 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 19:29:11.539119  459741 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:29:11.539416  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:29:11.539452  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:29:11.554412  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32849
	I0717 19:29:11.554815  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:29:11.555296  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:29:11.555317  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:29:11.555633  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:29:11.555830  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:29:11.590907  459741 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:29:11.592089  459741 start.go:297] selected driver: kvm2
	I0717 19:29:11.592110  459741 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:29:11.592224  459741 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:29:11.592942  459741 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:29:11.593047  459741 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:29:11.607578  459741 install.go:137] /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 19:29:11.607960  459741 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:29:11.608027  459741 cni.go:84] Creating CNI manager for ""
	I0717 19:29:11.608045  459741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:29:11.608102  459741 start.go:340] cluster config:
	{Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:29:11.608223  459741 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:29:11.609956  459741 out.go:177] * Starting "old-k8s-version-998147" primary control-plane node in "old-k8s-version-998147" cluster
	I0717 19:29:11.611130  459741 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:29:11.611167  459741 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 19:29:11.611178  459741 cache.go:56] Caching tarball of preloaded images
	I0717 19:29:11.611285  459741 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:29:11.611302  459741 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 19:29:11.611414  459741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json ...
	I0717 19:29:11.611598  459741 start.go:360] acquireMachinesLock for old-k8s-version-998147: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:33:13.378581  459741 start.go:364] duration metric: took 4m1.766913597s to acquireMachinesLock for "old-k8s-version-998147"
	I0717 19:33:13.378661  459741 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:33:13.378670  459741 fix.go:54] fixHost starting: 
	I0717 19:33:13.379301  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:13.379346  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:13.399824  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0717 19:33:13.400269  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:13.400788  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:33:13.400811  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:13.401179  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:13.401339  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:13.401493  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetState
	I0717 19:33:13.403027  459741 fix.go:112] recreateIfNeeded on old-k8s-version-998147: state=Stopped err=<nil>
	I0717 19:33:13.403059  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	W0717 19:33:13.403205  459741 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:33:13.405244  459741 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-998147" ...
	I0717 19:33:13.406372  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .Start
	I0717 19:33:13.406519  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring networks are active...
	I0717 19:33:13.407255  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring network default is active
	I0717 19:33:13.407627  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring network mk-old-k8s-version-998147 is active
	I0717 19:33:13.408062  459741 main.go:141] libmachine: (old-k8s-version-998147) Getting domain xml...
	I0717 19:33:13.408909  459741 main.go:141] libmachine: (old-k8s-version-998147) Creating domain...
	I0717 19:33:14.690306  459741 main.go:141] libmachine: (old-k8s-version-998147) Waiting to get IP...
	I0717 19:33:14.691339  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:14.691802  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:14.691860  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:14.691788  460739 retry.go:31] will retry after 292.702678ms: waiting for machine to come up
	I0717 19:33:14.986450  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:14.986962  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:14.986987  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:14.986940  460739 retry.go:31] will retry after 251.722663ms: waiting for machine to come up
	I0717 19:33:15.240732  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:15.241343  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:15.241374  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:15.241290  460739 retry.go:31] will retry after 352.774498ms: waiting for machine to come up
	I0717 19:33:15.596176  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:15.596833  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:15.596859  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:15.596740  460739 retry.go:31] will retry after 570.542375ms: waiting for machine to come up
	I0717 19:33:16.168613  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:16.169103  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:16.169125  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:16.169061  460739 retry.go:31] will retry after 505.770507ms: waiting for machine to come up
	I0717 19:33:16.676221  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:16.676783  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:16.676810  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:16.676699  460739 retry.go:31] will retry after 789.027841ms: waiting for machine to come up
	I0717 19:33:17.467899  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:17.468360  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:17.468388  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:17.468307  460739 retry.go:31] will retry after 851.039047ms: waiting for machine to come up
	I0717 19:33:18.321307  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:18.321848  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:18.321877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:18.321790  460739 retry.go:31] will retry after 1.177722997s: waiting for machine to come up
	I0717 19:33:19.501191  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:19.501846  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:19.501877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:19.501754  460739 retry.go:31] will retry after 1.20353732s: waiting for machine to come up
	I0717 19:33:20.707223  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:20.707681  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:20.707715  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:20.707620  460739 retry.go:31] will retry after 2.05955161s: waiting for machine to come up
	I0717 19:33:22.769700  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:22.770437  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:22.770462  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:22.770379  460739 retry.go:31] will retry after 2.380645077s: waiting for machine to come up
	I0717 19:33:25.152531  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:25.153124  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:25.153154  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:25.152995  460739 retry.go:31] will retry after 2.594173577s: waiting for machine to come up
	I0717 19:33:27.748311  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:27.748683  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:27.748710  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:27.748647  460739 retry.go:31] will retry after 3.034683519s: waiting for machine to come up
	I0717 19:33:30.784524  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.784995  459741 main.go:141] libmachine: (old-k8s-version-998147) Found IP for machine: 192.168.72.208
	I0717 19:33:30.785018  459741 main.go:141] libmachine: (old-k8s-version-998147) Reserving static IP address...
	I0717 19:33:30.785042  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has current primary IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.785437  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "old-k8s-version-998147", mac: "52:54:00:e7:d4:91", ip: "192.168.72.208"} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.785462  459741 main.go:141] libmachine: (old-k8s-version-998147) Reserved static IP address: 192.168.72.208
	I0717 19:33:30.785478  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | skip adding static IP to network mk-old-k8s-version-998147 - found existing host DHCP lease matching {name: "old-k8s-version-998147", mac: "52:54:00:e7:d4:91", ip: "192.168.72.208"}
	I0717 19:33:30.785490  459741 main.go:141] libmachine: (old-k8s-version-998147) Waiting for SSH to be available...
	I0717 19:33:30.785502  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Getting to WaitForSSH function...
	I0717 19:33:30.787861  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.788286  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.788339  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.788506  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using SSH client type: external
	I0717 19:33:30.788535  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa (-rw-------)
	I0717 19:33:30.788575  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:30.788592  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | About to run SSH command:
	I0717 19:33:30.788605  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | exit 0
	I0717 19:33:30.916827  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | SSH cmd err, output: <nil>: 
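	The guest is now answering on SSH. A minimal sketch (not part of the test run) of reproducing that reachability check by hand, reusing the key path, user and IP the driver logged above:
	    # exit status 0 means the VM accepts the machine key on port 22
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa \
	        docker@192.168.72.208 'exit 0' && echo "SSH reachable"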
	I0717 19:33:30.917232  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetConfigRaw
	I0717 19:33:30.917949  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:30.920672  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.921033  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.921069  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.921321  459741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json ...
	I0717 19:33:30.921518  459741 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:30.921538  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:30.921777  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:30.923995  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.924337  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.924364  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.924515  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:30.924708  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:30.924894  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:30.925021  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:30.925229  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:30.925417  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:30.925428  459741 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:31.037218  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:31.037249  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.037537  459741 buildroot.go:166] provisioning hostname "old-k8s-version-998147"
	I0717 19:33:31.037569  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.037782  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.040877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.041209  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.041252  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.041382  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.041577  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.041764  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.041940  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.042121  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.042313  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.042329  459741 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-998147 && echo "old-k8s-version-998147" | sudo tee /etc/hostname
	I0717 19:33:31.169368  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-998147
	
	I0717 19:33:31.169401  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.172170  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.172475  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.172520  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.172739  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.172950  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.173133  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.173321  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.173557  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.173809  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.173828  459741 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-998147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-998147/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-998147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:31.293920  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
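	A quick sketch, assuming the same SSH session into the guest, of verifying what the hostname script above should have left behind:
	    hostname                                  # expected: old-k8s-version-998147
	    grep old-k8s-version-998147 /etc/hosts    # expected: 127.0.1.1 old-k8s-version-998147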
	I0717 19:33:31.293957  459741 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:31.293997  459741 buildroot.go:174] setting up certificates
	I0717 19:33:31.294010  459741 provision.go:84] configureAuth start
	I0717 19:33:31.294022  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.294383  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:31.297356  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.297766  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.297800  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.297961  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.300159  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.300454  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.300507  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.300638  459741 provision.go:143] copyHostCerts
	I0717 19:33:31.300707  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:31.300721  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:31.300787  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:31.300917  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:31.300929  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:31.300962  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:31.301038  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:31.301046  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:31.301066  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:31.301112  459741 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-998147 san=[127.0.0.1 192.168.72.208 localhost minikube old-k8s-version-998147]
	I0717 19:33:31.522479  459741 provision.go:177] copyRemoteCerts
	I0717 19:33:31.522546  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:31.522602  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.525768  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.526171  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.526203  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.526344  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.526551  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.526724  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.526904  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:31.612117  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 19:33:31.638832  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:31.664757  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:31.689941  459741 provision.go:87] duration metric: took 395.916596ms to configureAuth
	I0717 19:33:31.689975  459741 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:31.690190  459741 config.go:182] Loaded profile config "old-k8s-version-998147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:33:31.690265  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.692837  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.693207  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.693234  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.693449  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.693671  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.693826  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.694059  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.694245  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.694413  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.694429  459741 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:31.974825  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:31.974852  459741 machine.go:97] duration metric: took 1.053320969s to provisionDockerMachine
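	The provisioning step above wrote a CRI-O drop-in with the --insecure-registry option and restarted the service; a hedged sketch of checking the result from inside the guest:
	    cat /etc/sysconfig/crio.minikube    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    sudo systemctl is-active crio       # expected: active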
	I0717 19:33:31.974865  459741 start.go:293] postStartSetup for "old-k8s-version-998147" (driver="kvm2")
	I0717 19:33:31.974875  459741 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:31.974896  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:31.975219  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:31.975248  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.978388  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.978767  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.978799  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.979026  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.979228  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.979423  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.979548  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.063516  459741 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:32.067826  459741 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:32.067854  459741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:32.067935  459741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:32.068032  459741 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:32.068178  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:32.077672  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:32.102750  459741 start.go:296] duration metric: took 127.86801ms for postStartSetup
	I0717 19:33:32.102793  459741 fix.go:56] duration metric: took 18.724124854s for fixHost
	I0717 19:33:32.102816  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.105928  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.106324  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.106349  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.106498  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.106750  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.106912  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.107091  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.107267  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:32.107435  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:32.107447  459741 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 19:33:32.217378  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244812.173823160
	
	I0717 19:33:32.217412  459741 fix.go:216] guest clock: 1721244812.173823160
	I0717 19:33:32.217424  459741 fix.go:229] Guest: 2024-07-17 19:33:32.17382316 +0000 UTC Remote: 2024-07-17 19:33:32.102798084 +0000 UTC m=+260.639424711 (delta=71.025076ms)
	I0717 19:33:32.217462  459741 fix.go:200] guest clock delta is within tolerance: 71.025076ms
	I0717 19:33:32.217476  459741 start.go:83] releasing machines lock for "old-k8s-version-998147", held for 18.838841423s
	I0717 19:33:32.217515  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.217908  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:32.221349  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.221669  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.221701  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.221823  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222444  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222647  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222744  459741 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:32.222799  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.222935  459741 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:32.222963  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.225811  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.225842  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226180  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.226207  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226235  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.226252  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226347  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.226651  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.226654  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.226818  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.226911  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.226963  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.227238  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.227243  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.331645  459741 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:32.338968  459741 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:32.491164  459741 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:32.498407  459741 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:32.498472  459741 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:32.515829  459741 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:32.515858  459741 start.go:495] detecting cgroup driver to use...
	I0717 19:33:32.515926  459741 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:32.534094  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:32.549874  459741 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:32.549938  459741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:32.565389  459741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:32.580187  459741 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:32.709855  459741 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:32.889734  459741 docker.go:233] disabling docker service ...
	I0717 19:33:32.889804  459741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:32.909179  459741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:32.923944  459741 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:33.043740  459741 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:33.174272  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:33.189545  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:33.210166  459741 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 19:33:33.210238  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.222478  459741 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:33.222547  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.234479  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.247161  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.258702  459741 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
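	The sed edits above rewrite the CRI-O drop-in config; a sketch of confirming the values they are intended to leave in place (expected output inferred from the sed expressions, not captured from this run):
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.2"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"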
	I0717 19:33:33.271516  459741 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:33.282032  459741 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:33.282087  459741 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:33.296554  459741 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
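	The earlier sysctl probe failed because br_netfilter was not loaded, so the run falls back to modprobe and enables IP forwarding directly. A sketch of re-checking both settings afterwards, assuming the module loaded cleanly:
	    lsmod | grep br_netfilter                        # module should now be listed
	    sudo sysctl net.bridge.bridge-nf-call-iptables   # should resolve instead of "cannot stat"
	    sudo sysctl net.ipv4.ip_forward                  # expected: net.ipv4.ip_forward = 1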
	I0717 19:33:33.307378  459741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:33.447447  459741 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:33:33.606295  459741 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:33.606388  459741 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:33.611193  459741 start.go:563] Will wait 60s for crictl version
	I0717 19:33:33.611252  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:33.615370  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:33.660721  459741 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:33.660803  459741 ssh_runner.go:195] Run: crio --version
	I0717 19:33:33.695406  459741 ssh_runner.go:195] Run: crio --version
	I0717 19:33:33.727703  459741 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 19:33:33.729003  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:33.732254  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:33.732730  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:33.732761  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:33.732992  459741 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:33.737578  459741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:33.751952  459741 kubeadm.go:883] updating cluster {Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:33.752069  459741 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:33:33.752141  459741 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:33.799085  459741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:33:33.799167  459741 ssh_runner.go:195] Run: which lz4
	I0717 19:33:33.803899  459741 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 19:33:33.808398  459741 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:33.808431  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 19:33:35.539736  459741 crio.go:462] duration metric: took 1.735871318s to copy over tarball
	I0717 19:33:35.539833  459741 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:38.677338  459741 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.137463162s)
	I0717 19:33:38.677381  459741 crio.go:469] duration metric: took 3.137607875s to extract the tarball
	I0717 19:33:38.677396  459741 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:38.721981  459741 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:38.756640  459741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:33:38.756670  459741 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:33:38.756755  459741 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:38.756840  459741 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:38.756885  459741 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:38.756923  459741 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.756887  459741 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 19:33:38.756866  459741 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:38.756875  459741 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:38.757061  459741 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:38.758622  459741 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:38.758705  459741 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 19:33:38.758860  459741 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:38.758902  459741 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.758945  459741 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:38.758977  459741 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:38.759058  459741 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:38.759126  459741 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
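	The "No such image" responses above suggest the images are not present in a local Docker daemon, so minikube falls back to the tarballs under its image cache (the "Loading image from" lines that follow). A sketch of listing that cache on the Jenkins host, assuming the .minikube path shown in the log:
	    ls /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/
	    # expected to include entries such as pause_3.2, etcd_3.4.13-0, kube-controller-manager_v1.20.0, kube-proxy_v1.20.0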
	I0717 19:33:38.947033  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 19:33:38.978340  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.989519  459741 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 19:33:38.989583  459741 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 19:33:38.989631  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.007170  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.034177  459741 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 19:33:39.034232  459741 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:39.034282  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.034287  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 19:33:39.062389  459741 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 19:33:39.062443  459741 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.062490  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.080521  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:39.080640  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 19:33:39.080739  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.101886  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.114010  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.122572  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.131514  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 19:33:39.145327  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 19:33:39.187564  459741 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 19:33:39.187685  459741 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.187756  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.192838  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.232745  459741 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 19:33:39.232807  459741 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.232822  459741 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 19:33:39.232864  459741 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.232897  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.232918  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.232867  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.249586  459741 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 19:33:39.249634  459741 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.249677  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.280522  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 19:33:39.280616  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.280622  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.280736  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.354545  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 19:33:39.354577  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 19:33:39.354740  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 19:33:39.640493  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:39.792919  459741 cache_images.go:92] duration metric: took 1.03622454s to LoadCachedImages
	W0717 19:33:39.793071  459741 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0717 19:33:39.793093  459741 kubeadm.go:934] updating node { 192.168.72.208 8443 v1.20.0 crio true true} ...
	I0717 19:33:39.793266  459741 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-998147 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:39.793390  459741 ssh_runner.go:195] Run: crio config
	I0717 19:33:39.854291  459741 cni.go:84] Creating CNI manager for ""
	I0717 19:33:39.854320  459741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:39.854333  459741 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:39.854355  459741 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-998147 NodeName:old-k8s-version-998147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 19:33:39.854569  459741 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-998147"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:39.854672  459741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 19:33:39.865802  459741 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:39.865892  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:39.878728  459741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 19:33:39.899402  459741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:39.917946  459741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 19:33:39.937916  459741 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:39.942211  459741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:39.957083  459741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:40.077407  459741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:40.096211  459741 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147 for IP: 192.168.72.208
	I0717 19:33:40.096244  459741 certs.go:194] generating shared ca certs ...
	I0717 19:33:40.096269  459741 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:40.096511  459741 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:40.096578  459741 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:40.096592  459741 certs.go:256] generating profile certs ...
	I0717 19:33:40.096727  459741 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/client.key
	I0717 19:33:40.096794  459741 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key.204e9011
	I0717 19:33:40.096852  459741 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key
	I0717 19:33:40.097009  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:40.097049  459741 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:40.097062  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:40.097095  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:40.097133  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:40.097161  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:40.097215  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:40.097920  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:40.144174  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:40.182700  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:40.222340  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:40.259248  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 19:33:40.302619  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:33:40.335170  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:40.373447  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:33:40.409075  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:40.435692  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:40.460419  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:40.492357  459741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:40.515212  459741 ssh_runner.go:195] Run: openssl version
	I0717 19:33:40.523462  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:40.537951  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.544201  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.544264  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.552233  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:40.567486  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:40.583035  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.589287  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.589367  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.595802  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:40.613013  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:40.625080  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.630225  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.630298  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.636697  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:40.647728  459741 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:40.653165  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:40.659380  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:40.666126  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:40.673361  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:40.680123  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:40.686669  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 19:33:40.693569  459741 kubeadm.go:392] StartCluster: {Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:40.693682  459741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:40.693767  459741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:40.737536  459741 cri.go:89] found id: ""
	I0717 19:33:40.737637  459741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:40.749268  459741 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:40.749292  459741 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:40.749347  459741 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:40.760298  459741 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:40.761436  459741 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-998147" does not appear in /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:33:40.762162  459741 kubeconfig.go:62] /home/jenkins/minikube-integration/19282-392903/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-998147" cluster setting kubeconfig missing "old-k8s-version-998147" context setting]
	I0717 19:33:40.763136  459741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:40.860353  459741 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:40.871291  459741 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.208
	I0717 19:33:40.871329  459741 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:40.871348  459741 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:40.871404  459741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:40.909329  459741 cri.go:89] found id: ""
	I0717 19:33:40.909419  459741 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:40.926501  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:40.937534  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:40.937565  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:40.937640  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:40.946613  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:40.946692  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:40.956996  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:40.965988  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:40.966046  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:40.975285  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:40.984577  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:40.984642  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:40.994458  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:41.007766  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:41.007821  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:41.020451  459741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:41.034173  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:41.176766  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:42.579917  459741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.403105878s)
	I0717 19:33:42.579958  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:42.840718  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:42.961394  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:43.055710  459741 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:43.055799  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:43.556468  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:44.055954  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:44.555966  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:45.056266  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:45.556627  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:46.056807  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:46.555904  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:47.056616  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:47.556787  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:48.056072  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:48.555979  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:49.056074  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:49.556619  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:50.056758  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:50.555862  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:51.055991  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:51.556187  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:52.056816  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:52.555884  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.056440  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.556003  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:54.056810  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:54.556947  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:55.055878  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:55.556110  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:56.056460  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:56.556934  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.055977  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.556878  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.056308  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.556348  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:59.056674  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:59.556870  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:00.055931  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:00.555977  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.055886  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.556897  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:02.056800  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:02.556122  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:03.056427  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:03.556914  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.056571  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.556144  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:05.056037  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:05.555875  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:06.056743  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:06.556740  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:07.056120  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:07.556375  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.055926  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.556426  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:09.056856  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:09.556032  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:10.056791  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:10.556117  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:11.056198  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:11.556103  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:12.056463  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:12.556709  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.056048  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.556926  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:14.056810  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:14.556793  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:15.056168  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:15.556716  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:16.056041  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:16.556695  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.056877  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.556620  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:18.056628  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:18.556552  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:19.056137  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:19.556627  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:20.056655  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:20.556041  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:21.056058  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:21.556663  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.056552  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.556508  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:23.056623  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:23.556414  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:24.055964  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:24.556741  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:25.056721  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:25.556914  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:26.056520  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:26.555925  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:27.056754  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:27.555925  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:28.056226  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:28.556626  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.056219  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.556961  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:30.056546  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:30.555883  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:31.056398  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:31.556766  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:32.056928  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:32.556232  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:33.055917  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:33.556864  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.056869  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.555951  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:35.056718  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:35.556230  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:36.056542  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:36.556557  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:37.056940  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:37.556241  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.056369  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.555969  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:39.056289  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:39.556107  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:40.055999  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:40.556561  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:41.055882  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:41.556589  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:42.055932  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:42.556345  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:43.056754  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:43.056873  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:43.097168  459741 cri.go:89] found id: ""
	I0717 19:34:43.097214  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.097226  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:43.097234  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:43.097302  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:43.139033  459741 cri.go:89] found id: ""
	I0717 19:34:43.139067  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.139077  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:43.139084  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:43.139138  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:43.179520  459741 cri.go:89] found id: ""
	I0717 19:34:43.179549  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.179558  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:43.179566  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:43.179705  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:43.216014  459741 cri.go:89] found id: ""
	I0717 19:34:43.216044  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.216063  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:43.216071  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:43.216141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:43.250985  459741 cri.go:89] found id: ""
	I0717 19:34:43.251030  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.251038  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:43.251044  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:43.251109  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:43.286797  459741 cri.go:89] found id: ""
	I0717 19:34:43.286840  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.286849  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:43.286856  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:43.286919  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:43.321626  459741 cri.go:89] found id: ""
	I0717 19:34:43.321657  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.321665  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:43.321671  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:43.321733  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:43.355415  459741 cri.go:89] found id: ""
	I0717 19:34:43.355444  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.355452  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:43.355462  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:43.355476  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:43.409331  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:43.409369  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:43.424013  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:43.424038  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:43.559102  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:43.559132  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:43.559149  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:43.625751  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:43.625791  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:46.168132  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:46.196943  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:46.197013  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:46.254167  459741 cri.go:89] found id: ""
	I0717 19:34:46.254197  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.254205  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:46.254211  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:46.254277  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:46.291018  459741 cri.go:89] found id: ""
	I0717 19:34:46.291052  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.291063  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:46.291072  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:46.291136  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:46.331767  459741 cri.go:89] found id: ""
	I0717 19:34:46.331812  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.331825  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:46.331835  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:46.331918  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:46.373157  459741 cri.go:89] found id: ""
	I0717 19:34:46.373206  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.373218  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:46.373226  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:46.373297  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:46.413014  459741 cri.go:89] found id: ""
	I0717 19:34:46.413041  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.413055  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:46.413061  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:46.413114  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:46.456115  459741 cri.go:89] found id: ""
	I0717 19:34:46.456148  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.456159  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:46.456167  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:46.456230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:46.492962  459741 cri.go:89] found id: ""
	I0717 19:34:46.493048  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.493063  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:46.493074  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:46.493149  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:46.533824  459741 cri.go:89] found id: ""
	I0717 19:34:46.533856  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.533868  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:46.533882  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:46.533899  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:46.614205  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:46.614229  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:46.614242  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:46.689833  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:46.689875  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:46.729427  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:46.729463  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:46.779887  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:46.779930  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:49.294846  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:49.308554  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:49.308625  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:49.343774  459741 cri.go:89] found id: ""
	I0717 19:34:49.343802  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.343810  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:49.343816  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:49.343872  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:49.380698  459741 cri.go:89] found id: ""
	I0717 19:34:49.380729  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.380737  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:49.380744  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:49.380796  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:49.422026  459741 cri.go:89] found id: ""
	I0717 19:34:49.422059  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.422073  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:49.422082  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:49.422147  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:49.465793  459741 cri.go:89] found id: ""
	I0717 19:34:49.465837  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.465850  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:49.465859  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:49.465929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:49.503462  459741 cri.go:89] found id: ""
	I0717 19:34:49.503507  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.503519  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:49.503528  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:49.503598  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:49.546776  459741 cri.go:89] found id: ""
	I0717 19:34:49.546808  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.546818  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:49.546826  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:49.546895  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:49.589367  459741 cri.go:89] found id: ""
	I0717 19:34:49.589401  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.589412  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:49.589420  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:49.589493  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:49.625497  459741 cri.go:89] found id: ""
	I0717 19:34:49.625532  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.625543  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:49.625557  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:49.625574  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:49.664499  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:49.664536  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:49.718160  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:49.718202  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:49.732774  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:49.732807  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:49.806951  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:49.806981  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:49.806999  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:52.379790  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:52.393469  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:52.393554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:52.434277  459741 cri.go:89] found id: ""
	I0717 19:34:52.434312  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.434322  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:52.434330  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:52.434388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:52.470378  459741 cri.go:89] found id: ""
	I0717 19:34:52.470413  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.470421  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:52.470428  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:52.470501  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:52.506331  459741 cri.go:89] found id: ""
	I0717 19:34:52.506361  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.506369  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:52.506376  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:52.506431  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:52.547497  459741 cri.go:89] found id: ""
	I0717 19:34:52.547532  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.547540  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:52.547545  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:52.547615  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:52.584389  459741 cri.go:89] found id: ""
	I0717 19:34:52.584423  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.584434  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:52.584442  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:52.584527  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:52.621381  459741 cri.go:89] found id: ""
	I0717 19:34:52.621408  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.621416  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:52.621422  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:52.621472  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:52.661706  459741 cri.go:89] found id: ""
	I0717 19:34:52.661744  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.661756  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:52.661764  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:52.661832  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:52.702736  459741 cri.go:89] found id: ""
	I0717 19:34:52.702763  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.702773  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:52.702784  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:52.702799  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:52.741742  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:52.741779  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:52.794377  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:52.794429  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:52.809685  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:52.809717  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:52.884263  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:52.884289  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:52.884305  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:55.472342  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:55.486612  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:55.486677  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:55.519486  459741 cri.go:89] found id: ""
	I0717 19:34:55.519514  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.519522  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:55.519528  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:55.519638  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:55.555162  459741 cri.go:89] found id: ""
	I0717 19:34:55.555190  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.555198  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:55.555204  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:55.555259  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:55.591239  459741 cri.go:89] found id: ""
	I0717 19:34:55.591276  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.591288  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:55.591297  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:55.591359  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:55.628203  459741 cri.go:89] found id: ""
	I0717 19:34:55.628239  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.628251  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:55.628258  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:55.628347  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:55.664663  459741 cri.go:89] found id: ""
	I0717 19:34:55.664702  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.664715  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:55.664725  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:55.664822  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:55.702741  459741 cri.go:89] found id: ""
	I0717 19:34:55.702773  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.702780  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:55.702788  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:55.702862  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:55.745601  459741 cri.go:89] found id: ""
	I0717 19:34:55.745642  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.745653  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:55.745661  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:55.745742  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:55.786699  459741 cri.go:89] found id: ""
	I0717 19:34:55.786727  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.786736  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:55.786746  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:55.786764  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:55.831685  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:55.831722  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:55.885346  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:55.885389  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:55.902374  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:55.902407  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:55.974221  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:55.974245  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:55.974259  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:58.557685  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:58.571821  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:58.571887  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:58.606713  459741 cri.go:89] found id: ""
	I0717 19:34:58.606742  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.606751  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:58.606757  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:58.606831  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:58.640693  459741 cri.go:89] found id: ""
	I0717 19:34:58.640728  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.640738  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:58.640746  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:58.640816  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:58.675351  459741 cri.go:89] found id: ""
	I0717 19:34:58.675385  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.675396  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:58.675403  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:58.675470  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:58.711792  459741 cri.go:89] found id: ""
	I0717 19:34:58.711825  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.711834  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:58.711841  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:58.711898  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:58.751391  459741 cri.go:89] found id: ""
	I0717 19:34:58.751418  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.751427  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:58.751432  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:58.751492  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:58.789067  459741 cri.go:89] found id: ""
	I0717 19:34:58.789099  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.789109  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:58.789116  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:58.789193  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:58.827415  459741 cri.go:89] found id: ""
	I0717 19:34:58.827453  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.827464  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:58.827470  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:58.827538  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:58.865505  459741 cri.go:89] found id: ""
	I0717 19:34:58.865543  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.865553  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:58.865566  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:58.865587  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:58.921388  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:58.921427  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:58.935694  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:58.935724  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:59.012534  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:59.012561  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:59.012598  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:59.095950  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:59.096045  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:01.640824  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:01.654969  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:01.655062  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:01.700480  459741 cri.go:89] found id: ""
	I0717 19:35:01.700528  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.700540  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:01.700548  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:01.700621  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:01.739274  459741 cri.go:89] found id: ""
	I0717 19:35:01.739309  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.739319  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:01.739327  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:01.739403  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:01.778555  459741 cri.go:89] found id: ""
	I0717 19:35:01.778591  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.778601  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:01.778609  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:01.778676  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:01.819147  459741 cri.go:89] found id: ""
	I0717 19:35:01.819189  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.819204  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:01.819213  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:01.819290  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:01.857132  459741 cri.go:89] found id: ""
	I0717 19:35:01.857178  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.857190  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:01.857199  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:01.857274  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:01.895551  459741 cri.go:89] found id: ""
	I0717 19:35:01.895583  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.895593  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:01.895602  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:01.895679  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:01.938146  459741 cri.go:89] found id: ""
	I0717 19:35:01.938185  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.938198  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:01.938206  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:01.938284  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:01.974876  459741 cri.go:89] found id: ""
	I0717 19:35:01.974909  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.974919  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:01.974933  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:01.974955  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:02.050651  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:02.050679  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:02.050711  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:02.130149  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:02.130191  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:02.170930  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:02.170961  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:02.226842  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:02.226889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
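	The timestamps on successive iterations (19:34:49, :52, :55, :58, 19:35:01, ...) show the probe repeating on roughly a three-second cadence until the apiserver appears or the overall wait times out. A minimal sketch of that retry shape, reusing the pgrep pattern from the log (the 3s interval is read off the timestamps, not taken from minikube's source):

	  # keep probing until a kube-apiserver process shows up
	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    sleep 3
	  done
	  echo "kube-apiserver process is up"
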
	I0717 19:35:04.742978  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:04.757649  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:04.757714  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:04.795487  459741 cri.go:89] found id: ""
	I0717 19:35:04.795517  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.795525  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:04.795531  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:04.795583  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:04.832554  459741 cri.go:89] found id: ""
	I0717 19:35:04.832596  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.832607  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:04.832620  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:04.832678  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:04.867859  459741 cri.go:89] found id: ""
	I0717 19:35:04.867895  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.867904  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:04.867911  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:04.867971  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:04.905936  459741 cri.go:89] found id: ""
	I0717 19:35:04.905969  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.905978  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:04.905985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:04.906064  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:04.943177  459741 cri.go:89] found id: ""
	I0717 19:35:04.943204  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.943213  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:04.943219  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:04.943273  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:04.980038  459741 cri.go:89] found id: ""
	I0717 19:35:04.980073  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.980087  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:04.980093  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:04.980154  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:05.020848  459741 cri.go:89] found id: ""
	I0717 19:35:05.020885  459741 logs.go:276] 0 containers: []
	W0717 19:35:05.020896  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:05.020907  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:05.020985  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:05.060505  459741 cri.go:89] found id: ""
	I0717 19:35:05.060543  459741 logs.go:276] 0 containers: []
	W0717 19:35:05.060556  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:05.060592  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:05.060617  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:05.113354  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:05.113400  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:05.128045  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:05.128086  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:05.213923  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:05.214020  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:05.214045  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:05.296526  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:05.296577  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:07.835865  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:07.851503  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:07.851581  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:07.899945  459741 cri.go:89] found id: ""
	I0717 19:35:07.899976  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.899984  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:07.899992  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:07.900066  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:07.938294  459741 cri.go:89] found id: ""
	I0717 19:35:07.938326  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.938335  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:07.938342  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:07.938402  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:07.975274  459741 cri.go:89] found id: ""
	I0717 19:35:07.975309  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.975319  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:07.975327  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:07.975401  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:08.010818  459741 cri.go:89] found id: ""
	I0717 19:35:08.010864  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.010873  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:08.010880  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:08.010945  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:08.054494  459741 cri.go:89] found id: ""
	I0717 19:35:08.054532  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.054544  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:08.054552  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:08.054651  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:08.096357  459741 cri.go:89] found id: ""
	I0717 19:35:08.096384  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.096393  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:08.096399  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:08.096461  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:08.134694  459741 cri.go:89] found id: ""
	I0717 19:35:08.134739  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.134749  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:08.134755  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:08.134833  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:08.171722  459741 cri.go:89] found id: ""
	I0717 19:35:08.171757  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.171768  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:08.171780  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:08.171797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:08.252441  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:08.252502  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:08.298782  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:08.298815  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:08.352934  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:08.352974  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:08.367121  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:08.367158  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:08.445860  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:10.946537  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:10.959955  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:10.960025  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:10.994611  459741 cri.go:89] found id: ""
	I0717 19:35:10.994646  459741 logs.go:276] 0 containers: []
	W0717 19:35:10.994658  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:10.994667  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:10.994733  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:11.031997  459741 cri.go:89] found id: ""
	I0717 19:35:11.032027  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.032035  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:11.032041  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:11.032115  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:11.073818  459741 cri.go:89] found id: ""
	I0717 19:35:11.073854  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.073865  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:11.073874  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:11.073942  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:11.109966  459741 cri.go:89] found id: ""
	I0717 19:35:11.110000  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.110012  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:11.110025  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:11.110100  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:11.146928  459741 cri.go:89] found id: ""
	I0717 19:35:11.146958  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.146980  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:11.146988  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:11.147056  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:11.189327  459741 cri.go:89] found id: ""
	I0717 19:35:11.189364  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.189374  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:11.189383  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:11.189457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:11.228587  459741 cri.go:89] found id: ""
	I0717 19:35:11.228628  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.228641  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:11.228650  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:11.228719  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:11.267624  459741 cri.go:89] found id: ""
	I0717 19:35:11.267671  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.267685  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:11.267699  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:11.267716  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:11.322589  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:11.322631  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:11.338101  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:11.338147  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:11.411360  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:11.411387  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:11.411405  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:11.495657  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:11.495701  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:14.037797  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:14.050939  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:14.051012  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:14.093711  459741 cri.go:89] found id: ""
	I0717 19:35:14.093744  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.093756  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:14.093764  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:14.093837  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:14.132139  459741 cri.go:89] found id: ""
	I0717 19:35:14.132168  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.132180  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:14.132188  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:14.132256  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:14.170950  459741 cri.go:89] found id: ""
	I0717 19:35:14.170978  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.170988  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:14.170995  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:14.171073  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:14.211104  459741 cri.go:89] found id: ""
	I0717 19:35:14.211138  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.211148  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:14.211155  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:14.211229  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:14.245921  459741 cri.go:89] found id: ""
	I0717 19:35:14.245961  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.245975  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:14.245985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:14.246053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:14.309477  459741 cri.go:89] found id: ""
	I0717 19:35:14.309509  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.309520  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:14.309529  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:14.309617  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:14.346835  459741 cri.go:89] found id: ""
	I0717 19:35:14.346863  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.346872  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:14.346878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:14.346935  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:14.381258  459741 cri.go:89] found id: ""
	I0717 19:35:14.381289  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.381298  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:14.381307  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:14.381324  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:14.436214  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:14.436262  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:14.452446  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:14.452478  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:14.520238  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:14.520265  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:14.520282  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:14.600444  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:14.600502  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:17.144586  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:17.157992  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:17.158084  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:17.195200  459741 cri.go:89] found id: ""
	I0717 19:35:17.195228  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.195238  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:17.195245  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:17.195308  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:17.231846  459741 cri.go:89] found id: ""
	I0717 19:35:17.231892  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.231904  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:17.231913  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:17.231974  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:17.268234  459741 cri.go:89] found id: ""
	I0717 19:35:17.268261  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.268269  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:17.268275  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:17.268328  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:17.308536  459741 cri.go:89] found id: ""
	I0717 19:35:17.308565  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.308574  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:17.308581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:17.308655  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:17.344285  459741 cri.go:89] found id: ""
	I0717 19:35:17.344316  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.344325  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:17.344331  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:17.344393  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:17.384384  459741 cri.go:89] found id: ""
	I0717 19:35:17.384416  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.384425  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:17.384431  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:17.384518  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:17.422255  459741 cri.go:89] found id: ""
	I0717 19:35:17.422282  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.422291  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:17.422297  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:17.422349  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:17.459561  459741 cri.go:89] found id: ""
	I0717 19:35:17.459590  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.459599  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:17.459611  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:17.459628  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:17.473472  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:17.473510  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:17.544929  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:17.544962  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:17.544979  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:17.627230  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:17.627275  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:17.680586  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:17.680622  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:20.234582  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:20.248215  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:20.248282  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:20.286124  459741 cri.go:89] found id: ""
	I0717 19:35:20.286159  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.286171  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:20.286180  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:20.286251  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:20.323885  459741 cri.go:89] found id: ""
	I0717 19:35:20.323925  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.323938  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:20.323945  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:20.324013  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:20.363968  459741 cri.go:89] found id: ""
	I0717 19:35:20.364011  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.364025  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:20.364034  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:20.364108  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:20.404100  459741 cri.go:89] found id: ""
	I0717 19:35:20.404127  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.404136  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:20.404142  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:20.404212  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:20.442339  459741 cri.go:89] found id: ""
	I0717 19:35:20.442372  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.442383  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:20.442391  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:20.442462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:20.480461  459741 cri.go:89] found id: ""
	I0717 19:35:20.480505  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.480517  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:20.480526  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:20.480618  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:20.516072  459741 cri.go:89] found id: ""
	I0717 19:35:20.516104  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.516114  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:20.516119  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:20.516171  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:20.552294  459741 cri.go:89] found id: ""
	I0717 19:35:20.552333  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.552345  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:20.552359  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:20.552377  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:20.607025  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:20.607067  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:20.624323  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:20.624363  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:20.716528  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:20.716550  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:20.716567  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:20.797015  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:20.797059  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:23.345063  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:23.358664  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:23.358781  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:23.395399  459741 cri.go:89] found id: ""
	I0717 19:35:23.395429  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.395436  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:23.395441  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:23.395498  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:23.434827  459741 cri.go:89] found id: ""
	I0717 19:35:23.434866  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.434880  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:23.434889  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:23.434960  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:23.470884  459741 cri.go:89] found id: ""
	I0717 19:35:23.470915  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.470931  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:23.470937  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:23.470989  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:23.508532  459741 cri.go:89] found id: ""
	I0717 19:35:23.508566  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.508575  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:23.508581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:23.508636  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:23.543803  459741 cri.go:89] found id: ""
	I0717 19:35:23.543840  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.543856  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:23.543865  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:23.543938  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:23.578897  459741 cri.go:89] found id: ""
	I0717 19:35:23.578942  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.578953  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:23.578962  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:23.579028  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:23.617967  459741 cri.go:89] found id: ""
	I0717 19:35:23.618003  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.618013  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:23.618021  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:23.618092  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:23.660780  459741 cri.go:89] found id: ""
	I0717 19:35:23.660818  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.660830  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:23.660845  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:23.660862  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:23.745248  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:23.745305  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:23.784355  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:23.784392  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:23.838152  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:23.838199  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:23.853017  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:23.853046  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:23.932674  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
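	Every "failed describe nodes" block in these iterations is the same symptom: the bundled kubectl reads the node-local kubeconfig, which points at localhost:8443, and nothing is listening there yet. To confirm that directly, the command from the log can be rerun as-is, optionally followed by a port check (ss is an assumption here; it is not one of the logged commands):

	  # exact command from the log above
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	  # verify whether anything is bound on the apiserver port
	  sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
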
	I0717 19:35:26.433476  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:26.457953  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:26.458030  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:26.515559  459741 cri.go:89] found id: ""
	I0717 19:35:26.515589  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.515598  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:26.515605  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:26.515668  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:26.555092  459741 cri.go:89] found id: ""
	I0717 19:35:26.555123  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.555134  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:26.555142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:26.555208  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:26.591291  459741 cri.go:89] found id: ""
	I0717 19:35:26.591335  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.591348  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:26.591357  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:26.591429  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:26.628941  459741 cri.go:89] found id: ""
	I0717 19:35:26.628970  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.628978  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:26.628985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:26.629050  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:26.668355  459741 cri.go:89] found id: ""
	I0717 19:35:26.668386  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.668394  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:26.668399  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:26.668457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:26.711810  459741 cri.go:89] found id: ""
	I0717 19:35:26.711846  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.711857  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:26.711865  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:26.711937  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:26.751674  459741 cri.go:89] found id: ""
	I0717 19:35:26.751708  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.751719  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:26.751726  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:26.751781  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:26.792690  459741 cri.go:89] found id: ""
	I0717 19:35:26.792784  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.792803  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:26.792816  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:26.792847  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:26.846466  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:26.846503  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:26.861467  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:26.861500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:26.934219  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:26.934244  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:26.934260  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:27.017150  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:27.017197  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:29.569360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:29.584040  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:29.584112  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:29.619704  459741 cri.go:89] found id: ""
	I0717 19:35:29.619738  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.619750  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:29.619756  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:29.619824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:29.655983  459741 cri.go:89] found id: ""
	I0717 19:35:29.656018  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.656030  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:29.656037  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:29.656103  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:29.694056  459741 cri.go:89] found id: ""
	I0717 19:35:29.694088  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.694098  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:29.694107  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:29.694165  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:29.731955  459741 cri.go:89] found id: ""
	I0717 19:35:29.732047  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.732066  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:29.732075  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:29.732142  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:29.765921  459741 cri.go:89] found id: ""
	I0717 19:35:29.765952  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.765961  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:29.765967  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:29.766022  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:29.798699  459741 cri.go:89] found id: ""
	I0717 19:35:29.798728  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.798736  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:29.798742  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:29.798804  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:29.832551  459741 cri.go:89] found id: ""
	I0717 19:35:29.832580  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.832587  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:29.832593  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:29.832652  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:29.867985  459741 cri.go:89] found id: ""
	I0717 19:35:29.868022  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.868033  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:29.868046  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:29.868071  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:29.941724  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:29.941746  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:29.941760  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:30.025462  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:30.025506  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:30.066732  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:30.066768  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:30.117389  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:30.117434  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:32.632779  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:32.648751  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:32.648828  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:32.686145  459741 cri.go:89] found id: ""
	I0717 19:35:32.686174  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.686182  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:32.686190  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:32.686242  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:32.721924  459741 cri.go:89] found id: ""
	I0717 19:35:32.721956  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.721967  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:32.721974  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:32.722042  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:32.760815  459741 cri.go:89] found id: ""
	I0717 19:35:32.760851  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.760862  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:32.760869  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:32.760939  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:32.797740  459741 cri.go:89] found id: ""
	I0717 19:35:32.797779  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.797792  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:32.797801  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:32.797878  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:32.833914  459741 cri.go:89] found id: ""
	I0717 19:35:32.833947  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.833955  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:32.833962  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:32.834020  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:32.870265  459741 cri.go:89] found id: ""
	I0717 19:35:32.870297  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.870306  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:32.870319  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:32.870388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:32.911340  459741 cri.go:89] found id: ""
	I0717 19:35:32.911380  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.911391  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:32.911402  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:32.911470  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:32.947932  459741 cri.go:89] found id: ""
	I0717 19:35:32.947967  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.947978  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:32.947990  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:32.948008  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:33.016473  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:33.016513  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:33.016527  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:33.096741  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:33.096783  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:33.137686  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:33.137723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:33.194110  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:33.194157  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:35.710074  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:35.723799  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:35.723880  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:35.759473  459741 cri.go:89] found id: ""
	I0717 19:35:35.759515  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.759526  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:35.759535  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:35.759606  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:35.796764  459741 cri.go:89] found id: ""
	I0717 19:35:35.796799  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.796809  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:35.796817  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:35.796892  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:35.831345  459741 cri.go:89] found id: ""
	I0717 19:35:35.831375  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.831386  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:35.831394  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:35.831463  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:35.869885  459741 cri.go:89] found id: ""
	I0717 19:35:35.869920  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.869931  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:35.869939  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:35.870009  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:35.908812  459741 cri.go:89] found id: ""
	I0717 19:35:35.908840  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.908849  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:35.908855  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:35.908909  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:35.946227  459741 cri.go:89] found id: ""
	I0717 19:35:35.946285  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.946297  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:35.946305  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:35.946387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:35.983534  459741 cri.go:89] found id: ""
	I0717 19:35:35.983577  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.983592  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:35.983601  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:35.983670  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:36.019516  459741 cri.go:89] found id: ""
	I0717 19:35:36.019552  459741 logs.go:276] 0 containers: []
	W0717 19:35:36.019564  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:36.019578  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:36.019597  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:36.070887  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:36.070931  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:36.087054  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:36.087092  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:36.163759  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:36.163795  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:36.163809  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:36.249968  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:36.250012  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:38.799616  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:38.813094  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:38.813161  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:38.848696  459741 cri.go:89] found id: ""
	I0717 19:35:38.848731  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.848745  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:38.848754  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:38.848836  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:38.885898  459741 cri.go:89] found id: ""
	I0717 19:35:38.885932  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.885943  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:38.885950  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:38.886016  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:38.925499  459741 cri.go:89] found id: ""
	I0717 19:35:38.925531  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.925543  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:38.925550  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:38.925615  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:38.961176  459741 cri.go:89] found id: ""
	I0717 19:35:38.961209  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.961218  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:38.961225  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:38.961279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:38.998940  459741 cri.go:89] found id: ""
	I0717 19:35:38.998971  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.998980  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:38.998986  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:38.999040  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:39.034934  459741 cri.go:89] found id: ""
	I0717 19:35:39.034966  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.034973  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:39.034980  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:39.035034  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:39.070278  459741 cri.go:89] found id: ""
	I0717 19:35:39.070309  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.070319  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:39.070327  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:39.070413  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:39.106302  459741 cri.go:89] found id: ""
	I0717 19:35:39.106337  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.106348  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:39.106361  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:39.106379  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:39.145656  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:39.145685  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:39.198998  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:39.199042  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:39.215383  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:39.215416  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:39.284244  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:39.284270  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:39.284286  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:41.864335  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:41.878557  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:41.878645  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:41.919806  459741 cri.go:89] found id: ""
	I0717 19:35:41.919843  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.919856  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:41.919865  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:41.919938  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:41.956113  459741 cri.go:89] found id: ""
	I0717 19:35:41.956144  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.956154  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:41.956161  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:41.956230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:41.996211  459741 cri.go:89] found id: ""
	I0717 19:35:41.996256  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.996266  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:41.996274  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:41.996341  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:42.030800  459741 cri.go:89] found id: ""
	I0717 19:35:42.030829  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.030840  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:42.030847  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:42.030922  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:42.065307  459741 cri.go:89] found id: ""
	I0717 19:35:42.065347  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.065358  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:42.065368  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:42.065440  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:42.103574  459741 cri.go:89] found id: ""
	I0717 19:35:42.103609  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.103621  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:42.103628  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:42.103693  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:42.141146  459741 cri.go:89] found id: ""
	I0717 19:35:42.141181  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.141320  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:42.141337  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:42.141418  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:42.179958  459741 cri.go:89] found id: ""
	I0717 19:35:42.179986  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.179994  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:42.180004  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:42.180017  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:42.194911  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:42.194947  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:42.267709  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:42.267750  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:42.267772  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:42.347258  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:42.347302  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:42.393595  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:42.393631  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:44.946043  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:44.958994  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:44.959086  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:44.997687  459741 cri.go:89] found id: ""
	I0717 19:35:44.997724  459741 logs.go:276] 0 containers: []
	W0717 19:35:44.997735  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:44.997743  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:44.997814  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:45.038023  459741 cri.go:89] found id: ""
	I0717 19:35:45.038060  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.038070  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:45.038079  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:45.038141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:45.073529  459741 cri.go:89] found id: ""
	I0717 19:35:45.073562  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.073573  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:45.073581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:45.073644  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:45.109831  459741 cri.go:89] found id: ""
	I0717 19:35:45.109863  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.109871  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:45.109878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:45.109933  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:45.147828  459741 cri.go:89] found id: ""
	I0717 19:35:45.147867  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.147891  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:45.147899  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:45.147986  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:45.184729  459741 cri.go:89] found id: ""
	I0717 19:35:45.184765  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.184777  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:45.184784  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:45.184846  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:45.223895  459741 cri.go:89] found id: ""
	I0717 19:35:45.223940  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.223950  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:45.223956  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:45.224016  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:45.263391  459741 cri.go:89] found id: ""
	I0717 19:35:45.263421  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.263430  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:45.263440  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:45.263457  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:45.316323  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:45.316369  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:45.331447  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:45.331491  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:45.413226  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:45.413259  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:45.413277  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:45.498680  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:45.498738  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:48.043162  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:48.057081  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:48.057146  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:48.096607  459741 cri.go:89] found id: ""
	I0717 19:35:48.096636  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.096644  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:48.096650  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:48.096710  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:48.132865  459741 cri.go:89] found id: ""
	I0717 19:35:48.132895  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.132906  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:48.132913  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:48.132979  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:48.168060  459741 cri.go:89] found id: ""
	I0717 19:35:48.168090  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.168102  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:48.168109  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:48.168177  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:48.203993  459741 cri.go:89] found id: ""
	I0717 19:35:48.204023  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.204033  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:48.204041  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:48.204102  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:48.240321  459741 cri.go:89] found id: ""
	I0717 19:35:48.240353  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.240364  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:48.240371  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:48.240440  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:48.281103  459741 cri.go:89] found id: ""
	I0717 19:35:48.281147  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.281158  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:48.281167  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:48.281233  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:48.316002  459741 cri.go:89] found id: ""
	I0717 19:35:48.316034  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.316043  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:48.316049  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:48.316102  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:48.355370  459741 cri.go:89] found id: ""
	I0717 19:35:48.355399  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.355409  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:48.355421  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:48.355456  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:48.372448  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:48.372496  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:48.443867  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:48.443901  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:48.443919  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:48.519762  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:48.519807  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:48.562263  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:48.562297  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:51.112016  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:51.125350  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:51.125421  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:51.162053  459741 cri.go:89] found id: ""
	I0717 19:35:51.162090  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.162101  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:51.162111  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:51.162182  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:51.201853  459741 cri.go:89] found id: ""
	I0717 19:35:51.201924  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.201937  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:51.201944  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:51.202021  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:51.241675  459741 cri.go:89] found id: ""
	I0717 19:35:51.241709  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.241720  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:51.241729  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:51.241798  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:51.279332  459741 cri.go:89] found id: ""
	I0717 19:35:51.279369  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.279380  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:51.279388  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:51.279443  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:51.316375  459741 cri.go:89] found id: ""
	I0717 19:35:51.316413  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.316424  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:51.316432  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:51.316531  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:51.353300  459741 cri.go:89] found id: ""
	I0717 19:35:51.353337  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.353347  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:51.353355  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:51.353424  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:51.390413  459741 cri.go:89] found id: ""
	I0717 19:35:51.390441  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.390449  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:51.390457  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:51.390523  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:51.428040  459741 cri.go:89] found id: ""
	I0717 19:35:51.428077  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.428089  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:51.428103  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:51.428145  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:51.481743  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:51.481792  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:51.498226  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:51.498261  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:51.579871  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:51.579895  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:51.579909  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:51.659448  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:51.659490  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:54.201712  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:54.215688  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:54.215766  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:54.253448  459741 cri.go:89] found id: ""
	I0717 19:35:54.253479  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.253487  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:54.253493  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:54.253547  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:54.288135  459741 cri.go:89] found id: ""
	I0717 19:35:54.288176  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.288187  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:54.288194  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:54.288292  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:54.324798  459741 cri.go:89] found id: ""
	I0717 19:35:54.324845  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.324855  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:54.324864  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:54.324936  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:54.363909  459741 cri.go:89] found id: ""
	I0717 19:35:54.363943  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.363955  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:54.363964  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:54.364039  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:54.401221  459741 cri.go:89] found id: ""
	I0717 19:35:54.401248  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.401259  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:54.401267  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:54.401335  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:54.439258  459741 cri.go:89] found id: ""
	I0717 19:35:54.439285  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.439293  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:54.439299  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:54.439352  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:54.473321  459741 cri.go:89] found id: ""
	I0717 19:35:54.473358  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.473373  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:54.473379  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:54.473432  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:54.519107  459741 cri.go:89] found id: ""
	I0717 19:35:54.519141  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.519152  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:54.519167  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:54.519184  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:54.562666  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:54.562710  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:54.614711  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:54.614756  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:54.630953  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:54.630986  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:54.706639  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:54.706666  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:54.706684  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:57.289180  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:57.302364  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:57.302447  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:57.344401  459741 cri.go:89] found id: ""
	I0717 19:35:57.344437  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.344450  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:57.344459  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:57.344551  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:57.384095  459741 cri.go:89] found id: ""
	I0717 19:35:57.384126  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.384135  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:57.384142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:57.384209  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:57.422789  459741 cri.go:89] found id: ""
	I0717 19:35:57.422825  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.422836  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:57.422844  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:57.422914  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:57.460943  459741 cri.go:89] found id: ""
	I0717 19:35:57.460970  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.460979  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:57.460984  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:57.461035  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:57.495168  459741 cri.go:89] found id: ""
	I0717 19:35:57.495197  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.495204  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:57.495211  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:57.495267  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:57.529611  459741 cri.go:89] found id: ""
	I0717 19:35:57.529641  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.529649  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:57.529656  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:57.529719  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:57.565502  459741 cri.go:89] found id: ""
	I0717 19:35:57.565535  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.565544  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:57.565549  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:57.565610  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:57.601058  459741 cri.go:89] found id: ""
	I0717 19:35:57.601093  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.601107  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:57.601121  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:57.601139  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:57.651408  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:57.651450  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:57.665696  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:57.665734  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:57.739259  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:57.739301  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:57.739335  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:57.818085  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:57.818128  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:00.358441  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:00.371840  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:00.371904  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:00.411607  459741 cri.go:89] found id: ""
	I0717 19:36:00.411639  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.411647  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:00.411653  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:00.411717  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:00.448879  459741 cri.go:89] found id: ""
	I0717 19:36:00.448917  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.448929  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:00.448938  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:00.449006  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:00.489637  459741 cri.go:89] found id: ""
	I0717 19:36:00.489683  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.489695  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:00.489705  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:00.489773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:00.528172  459741 cri.go:89] found id: ""
	I0717 19:36:00.528206  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.528215  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:00.528221  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:00.528284  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:00.564857  459741 cri.go:89] found id: ""
	I0717 19:36:00.564891  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.564903  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:00.564911  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:00.564979  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:00.601226  459741 cri.go:89] found id: ""
	I0717 19:36:00.601257  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.601269  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:00.601277  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:00.601342  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:00.641481  459741 cri.go:89] found id: ""
	I0717 19:36:00.641515  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.641526  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:00.641533  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:00.641609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:00.678564  459741 cri.go:89] found id: ""
	I0717 19:36:00.678590  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.678598  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:00.678608  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:00.678622  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:00.763613  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:00.763657  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:00.804763  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:00.804797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:00.856648  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:00.856686  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:00.870767  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:00.870797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:00.949952  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:03.450461  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:03.465429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:03.465500  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:03.504346  459741 cri.go:89] found id: ""
	I0717 19:36:03.504377  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.504387  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:03.504393  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:03.504457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:03.546643  459741 cri.go:89] found id: ""
	I0717 19:36:03.546671  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.546678  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:03.546685  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:03.546741  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:03.587389  459741 cri.go:89] found id: ""
	I0717 19:36:03.587423  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.587435  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:03.587443  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:03.587506  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:03.621968  459741 cri.go:89] found id: ""
	I0717 19:36:03.622002  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.622014  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:03.622023  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:03.622095  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:03.655934  459741 cri.go:89] found id: ""
	I0717 19:36:03.655967  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.655976  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:03.655982  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:03.656051  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:03.690464  459741 cri.go:89] found id: ""
	I0717 19:36:03.690493  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.690503  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:03.690511  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:03.690575  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:03.727030  459741 cri.go:89] found id: ""
	I0717 19:36:03.727068  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.727080  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:03.727088  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:03.727158  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:03.760858  459741 cri.go:89] found id: ""
	I0717 19:36:03.760898  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.760907  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:03.760917  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:03.760931  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:03.774333  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:03.774366  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:03.849228  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:03.849255  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:03.849273  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:03.930165  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:03.930203  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:03.971833  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:03.971875  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:06.525723  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:06.539410  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:06.539502  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:06.580112  459741 cri.go:89] found id: ""
	I0717 19:36:06.580152  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.580173  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:06.580181  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:06.580272  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:06.622098  459741 cri.go:89] found id: ""
	I0717 19:36:06.622128  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.622136  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:06.622142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:06.622209  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:06.669930  459741 cri.go:89] found id: ""
	I0717 19:36:06.669962  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.669973  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:06.669982  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:06.670048  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:06.717072  459741 cri.go:89] found id: ""
	I0717 19:36:06.717111  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.717124  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:06.717132  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:06.717207  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:06.756637  459741 cri.go:89] found id: ""
	I0717 19:36:06.756672  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.756680  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:06.756694  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:06.756756  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:06.804359  459741 cri.go:89] found id: ""
	I0717 19:36:06.804388  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.804397  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:06.804404  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:06.804468  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:06.856082  459741 cri.go:89] found id: ""
	I0717 19:36:06.856111  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.856120  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:06.856125  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:06.856180  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:06.898141  459741 cri.go:89] found id: ""
	I0717 19:36:06.898170  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.898180  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:06.898191  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:06.898209  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:06.975635  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:06.975660  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:06.975676  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:07.055695  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:07.055741  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:07.096041  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:07.096077  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:07.146523  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:07.146570  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:09.661906  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:09.676994  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:09.677078  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:09.716287  459741 cri.go:89] found id: ""
	I0717 19:36:09.716315  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.716328  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:09.716337  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:09.716405  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:09.759489  459741 cri.go:89] found id: ""
	I0717 19:36:09.759521  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.759532  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:09.759541  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:09.759601  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:09.799604  459741 cri.go:89] found id: ""
	I0717 19:36:09.799634  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.799643  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:09.799649  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:09.799709  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:09.839542  459741 cri.go:89] found id: ""
	I0717 19:36:09.839572  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.839581  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:09.839588  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:09.839666  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:09.879061  459741 cri.go:89] found id: ""
	I0717 19:36:09.879098  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.879110  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:09.879118  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:09.879184  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:09.920903  459741 cri.go:89] found id: ""
	I0717 19:36:09.920931  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.920939  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:09.920946  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:09.921002  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:09.956362  459741 cri.go:89] found id: ""
	I0717 19:36:09.956391  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.956411  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:09.956429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:09.956508  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:09.992817  459741 cri.go:89] found id: ""
	I0717 19:36:09.992849  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.992859  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:09.992872  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:09.992889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:10.060594  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:10.060620  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:10.060660  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:10.141840  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:10.141895  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:10.182850  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:10.182889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:10.238946  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:10.238993  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:12.753796  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:12.766740  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:12.766816  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:12.799307  459741 cri.go:89] found id: ""
	I0717 19:36:12.799341  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.799351  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:12.799362  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:12.799439  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:12.838345  459741 cri.go:89] found id: ""
	I0717 19:36:12.838395  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.838408  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:12.838416  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:12.838482  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:12.876780  459741 cri.go:89] found id: ""
	I0717 19:36:12.876807  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.876816  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:12.876822  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:12.876907  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:12.913222  459741 cri.go:89] found id: ""
	I0717 19:36:12.913253  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.913263  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:12.913271  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:12.913334  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:12.948210  459741 cri.go:89] found id: ""
	I0717 19:36:12.948245  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.948255  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:12.948263  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:12.948328  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:12.980746  459741 cri.go:89] found id: ""
	I0717 19:36:12.980782  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.980794  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:12.980806  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:12.980871  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:13.015655  459741 cri.go:89] found id: ""
	I0717 19:36:13.015694  459741 logs.go:276] 0 containers: []
	W0717 19:36:13.015707  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:13.015715  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:13.015773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:13.050570  459741 cri.go:89] found id: ""
	I0717 19:36:13.050609  459741 logs.go:276] 0 containers: []
	W0717 19:36:13.050617  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:13.050627  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:13.050642  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:13.101031  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:13.101072  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:13.115206  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:13.115239  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:13.190949  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:13.190979  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:13.190994  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:13.267467  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:13.267508  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:15.808237  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:15.822498  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:15.822570  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:15.860509  459741 cri.go:89] found id: ""
	I0717 19:36:15.860545  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.860556  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:15.860564  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:15.860630  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:15.895608  459741 cri.go:89] found id: ""
	I0717 19:36:15.895655  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.895666  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:15.895674  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:15.895738  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:15.936113  459741 cri.go:89] found id: ""
	I0717 19:36:15.936148  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.936159  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:15.936168  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:15.936254  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:15.973146  459741 cri.go:89] found id: ""
	I0717 19:36:15.973186  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.973198  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:15.973207  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:15.973273  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:16.006122  459741 cri.go:89] found id: ""
	I0717 19:36:16.006164  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.006175  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:16.006183  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:16.006255  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:16.044352  459741 cri.go:89] found id: ""
	I0717 19:36:16.044385  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.044397  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:16.044406  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:16.044476  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:16.081573  459741 cri.go:89] found id: ""
	I0717 19:36:16.081614  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.081625  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:16.081637  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:16.081707  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:16.120444  459741 cri.go:89] found id: ""
	I0717 19:36:16.120480  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.120506  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:16.120520  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:16.120536  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:16.171563  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:16.171601  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:16.185534  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:16.185564  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:16.258627  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:16.258657  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:16.258672  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:16.341345  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:16.341390  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:18.883092  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:18.897931  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:18.898015  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:18.932054  459741 cri.go:89] found id: ""
	I0717 19:36:18.932085  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.932096  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:18.932104  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:18.932162  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:18.966450  459741 cri.go:89] found id: ""
	I0717 19:36:18.966478  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.966490  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:18.966498  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:18.966561  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:18.999881  459741 cri.go:89] found id: ""
	I0717 19:36:18.999909  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.999920  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:18.999927  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:18.999984  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:19.036701  459741 cri.go:89] found id: ""
	I0717 19:36:19.036730  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.036746  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:19.036753  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:19.036824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:19.073488  459741 cri.go:89] found id: ""
	I0717 19:36:19.073515  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.073523  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:19.073528  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:19.073582  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:19.109128  459741 cri.go:89] found id: ""
	I0717 19:36:19.109161  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.109171  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:19.109179  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:19.109249  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:19.148452  459741 cri.go:89] found id: ""
	I0717 19:36:19.148494  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.148509  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:19.148518  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:19.148595  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:19.184056  459741 cri.go:89] found id: ""
	I0717 19:36:19.184086  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.184097  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:19.184112  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:19.184129  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:19.198518  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:19.198553  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:19.273176  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:19.273198  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:19.273213  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:19.347999  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:19.348042  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:19.390847  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:19.390890  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:21.946700  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:21.960590  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:21.960655  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:21.994632  459741 cri.go:89] found id: ""
	I0717 19:36:21.994662  459741 logs.go:276] 0 containers: []
	W0717 19:36:21.994670  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:21.994677  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:21.994738  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:22.029390  459741 cri.go:89] found id: ""
	I0717 19:36:22.029419  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.029428  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:22.029434  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:22.029484  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:22.065632  459741 cri.go:89] found id: ""
	I0717 19:36:22.065668  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.065679  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:22.065687  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:22.065792  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:22.100893  459741 cri.go:89] found id: ""
	I0717 19:36:22.100931  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.100942  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:22.100950  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:22.101007  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:22.137064  459741 cri.go:89] found id: ""
	I0717 19:36:22.137099  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.137110  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:22.137118  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:22.137187  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:22.176027  459741 cri.go:89] found id: ""
	I0717 19:36:22.176061  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.176071  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:22.176080  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:22.176147  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:22.211035  459741 cri.go:89] found id: ""
	I0717 19:36:22.211060  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.211068  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:22.211076  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:22.211129  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:22.246541  459741 cri.go:89] found id: ""
	I0717 19:36:22.246577  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.246589  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:22.246617  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:22.246635  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:22.288154  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:22.288198  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:22.342243  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:22.342295  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:22.356125  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:22.356157  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:22.427767  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:22.427793  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:22.427806  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:25.011986  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:25.026057  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:25.026134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:25.060744  459741 cri.go:89] found id: ""
	I0717 19:36:25.060778  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.060788  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:25.060794  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:25.060857  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:25.094760  459741 cri.go:89] found id: ""
	I0717 19:36:25.094799  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.094810  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:25.094818  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:25.094884  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:25.129937  459741 cri.go:89] found id: ""
	I0717 19:36:25.129980  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.129990  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:25.129996  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:25.130053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:25.162886  459741 cri.go:89] found id: ""
	I0717 19:36:25.162914  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.162922  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:25.162927  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:25.162994  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:25.199261  459741 cri.go:89] found id: ""
	I0717 19:36:25.199290  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.199312  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:25.199329  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:25.199388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:25.236454  459741 cri.go:89] found id: ""
	I0717 19:36:25.236494  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.236506  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:25.236514  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:25.236569  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:25.272257  459741 cri.go:89] found id: ""
	I0717 19:36:25.272293  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.272304  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:25.272312  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:25.272381  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:25.308442  459741 cri.go:89] found id: ""
	I0717 19:36:25.308478  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.308504  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:25.308517  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:25.308534  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:25.362269  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:25.362321  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:25.376994  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:25.377026  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:25.450219  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:25.450242  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:25.450256  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:25.537123  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:25.537161  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:28.077415  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:28.093047  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:28.093126  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:28.128129  459741 cri.go:89] found id: ""
	I0717 19:36:28.128158  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.128166  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:28.128180  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:28.128234  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:28.170796  459741 cri.go:89] found id: ""
	I0717 19:36:28.170834  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.170845  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:28.170853  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:28.170924  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:28.208250  459741 cri.go:89] found id: ""
	I0717 19:36:28.208278  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.208287  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:28.208304  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:28.208385  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:28.251511  459741 cri.go:89] found id: ""
	I0717 19:36:28.251547  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.251567  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:28.251575  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:28.251648  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:28.286597  459741 cri.go:89] found id: ""
	I0717 19:36:28.286633  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.286643  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:28.286651  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:28.286715  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:28.323089  459741 cri.go:89] found id: ""
	I0717 19:36:28.323119  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.323127  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:28.323133  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:28.323192  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:28.357941  459741 cri.go:89] found id: ""
	I0717 19:36:28.357972  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.357980  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:28.357987  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:28.358053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:28.393141  459741 cri.go:89] found id: ""
	I0717 19:36:28.393171  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.393182  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:28.393192  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:28.393208  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:28.446992  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:28.447031  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:28.460386  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:28.460416  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:28.524640  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:28.524671  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:28.524694  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:28.605322  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:28.605363  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:31.145909  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:31.159567  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:31.159686  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:31.196086  459741 cri.go:89] found id: ""
	I0717 19:36:31.196113  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.196125  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:31.196134  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:31.196186  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:31.238076  459741 cri.go:89] found id: ""
	I0717 19:36:31.238104  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.238111  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:31.238117  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:31.238172  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:31.274360  459741 cri.go:89] found id: ""
	I0717 19:36:31.274391  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.274400  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:31.274406  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:31.274462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:31.308845  459741 cri.go:89] found id: ""
	I0717 19:36:31.308871  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.308880  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:31.308886  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:31.308946  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:31.344978  459741 cri.go:89] found id: ""
	I0717 19:36:31.345010  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.345021  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:31.345028  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:31.345094  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:31.381741  459741 cri.go:89] found id: ""
	I0717 19:36:31.381767  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.381775  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:31.381783  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:31.381837  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:31.417522  459741 cri.go:89] found id: ""
	I0717 19:36:31.417554  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.417563  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:31.417571  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:31.417635  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:31.451121  459741 cri.go:89] found id: ""
	I0717 19:36:31.451152  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.451165  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:31.451177  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:31.451195  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:31.542015  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:31.542063  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:31.583418  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:31.583449  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:31.635807  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:31.635845  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:31.649144  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:31.649172  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:31.728539  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:34.229124  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:34.242482  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:34.242554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:34.276554  459741 cri.go:89] found id: ""
	I0717 19:36:34.276602  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.276610  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:34.276616  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:34.276671  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:34.314766  459741 cri.go:89] found id: ""
	I0717 19:36:34.314799  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.314807  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:34.314813  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:34.314874  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:34.352765  459741 cri.go:89] found id: ""
	I0717 19:36:34.352798  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.352809  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:34.352817  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:34.352886  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:34.386519  459741 cri.go:89] found id: ""
	I0717 19:36:34.386556  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.386564  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:34.386570  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:34.386669  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:34.423789  459741 cri.go:89] found id: ""
	I0717 19:36:34.423820  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.423829  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:34.423838  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:34.423911  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:34.458849  459741 cri.go:89] found id: ""
	I0717 19:36:34.458883  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.458895  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:34.458903  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:34.458969  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:34.494653  459741 cri.go:89] found id: ""
	I0717 19:36:34.494686  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.494697  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:34.494705  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:34.494770  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:34.529386  459741 cri.go:89] found id: ""
	I0717 19:36:34.529423  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.529431  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:34.529441  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:34.529455  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:34.582161  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:34.582204  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:34.596699  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:34.596732  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:34.673468  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:34.673501  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:34.673519  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:34.751134  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:34.751180  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:37.290429  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:37.304307  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:37.304391  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:37.338790  459741 cri.go:89] found id: ""
	I0717 19:36:37.338818  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.338827  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:37.338833  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:37.338903  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:37.376923  459741 cri.go:89] found id: ""
	I0717 19:36:37.376953  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.376961  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:37.376966  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:37.377017  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:37.415988  459741 cri.go:89] found id: ""
	I0717 19:36:37.416016  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.416024  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:37.416029  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:37.416083  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:37.449398  459741 cri.go:89] found id: ""
	I0717 19:36:37.449435  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.449447  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:37.449459  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:37.449532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:37.489489  459741 cri.go:89] found id: ""
	I0717 19:36:37.489525  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.489535  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:37.489544  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:37.489609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:37.528055  459741 cri.go:89] found id: ""
	I0717 19:36:37.528092  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.528103  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:37.528112  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:37.528174  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:37.564295  459741 cri.go:89] found id: ""
	I0717 19:36:37.564332  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.564344  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:37.564352  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:37.564421  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:37.597909  459741 cri.go:89] found id: ""
	I0717 19:36:37.597949  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.597960  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:37.597976  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:37.598002  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:37.652104  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:37.652147  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:37.668341  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:37.668374  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:37.746663  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:37.746693  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:37.746706  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:37.822210  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:37.822250  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:40.370417  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:40.385795  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:40.385873  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:40.431821  459741 cri.go:89] found id: ""
	I0717 19:36:40.431861  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.431873  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:40.431881  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:40.431952  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:40.468302  459741 cri.go:89] found id: ""
	I0717 19:36:40.468334  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.468346  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:40.468354  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:40.468409  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:40.503678  459741 cri.go:89] found id: ""
	I0717 19:36:40.503709  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.503727  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:40.503733  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:40.503785  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:40.540732  459741 cri.go:89] found id: ""
	I0717 19:36:40.540763  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.540772  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:40.540778  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:40.540843  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:40.589546  459741 cri.go:89] found id: ""
	I0717 19:36:40.589574  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.589583  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:40.589590  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:40.589642  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:40.625314  459741 cri.go:89] found id: ""
	I0717 19:36:40.625350  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.625359  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:40.625368  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:40.625435  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:40.663946  459741 cri.go:89] found id: ""
	I0717 19:36:40.663974  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.663982  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:40.663990  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:40.664048  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:40.701681  459741 cri.go:89] found id: ""
	I0717 19:36:40.701712  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.701722  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:40.701732  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:40.701747  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:40.762876  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:40.762913  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:40.777993  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:40.778039  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:40.854973  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:40.854996  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:40.855015  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:40.935075  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:40.935114  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:43.476048  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:43.490580  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:43.490652  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:43.525613  459741 cri.go:89] found id: ""
	I0717 19:36:43.525649  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.525658  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:43.525665  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:43.525722  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:43.564102  459741 cri.go:89] found id: ""
	I0717 19:36:43.564147  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.564158  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:43.564166  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:43.564230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:43.603290  459741 cri.go:89] found id: ""
	I0717 19:36:43.603316  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.603323  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:43.603329  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:43.603387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:43.638001  459741 cri.go:89] found id: ""
	I0717 19:36:43.638031  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.638038  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:43.638056  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:43.638134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:43.672992  459741 cri.go:89] found id: ""
	I0717 19:36:43.673026  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.673037  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:43.673045  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:43.673115  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:43.713130  459741 cri.go:89] found id: ""
	I0717 19:36:43.713165  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.713176  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:43.713188  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:43.713255  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:43.747637  459741 cri.go:89] found id: ""
	I0717 19:36:43.747685  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.747694  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:43.747702  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:43.747771  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:43.784425  459741 cri.go:89] found id: ""
	I0717 19:36:43.784460  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.784471  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:43.784492  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:43.784510  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:43.798454  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:43.798483  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:43.875753  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:43.875776  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:43.875793  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:43.957009  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:43.957052  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:44.001089  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:44.001122  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:46.554298  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:46.568658  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:46.568730  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:46.604721  459741 cri.go:89] found id: ""
	I0717 19:36:46.604750  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.604759  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:46.604765  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:46.604815  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:46.644164  459741 cri.go:89] found id: ""
	I0717 19:36:46.644196  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.644209  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:46.644217  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:46.644288  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:46.683657  459741 cri.go:89] found id: ""
	I0717 19:36:46.683695  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.683702  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:46.683708  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:46.683773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:46.720967  459741 cri.go:89] found id: ""
	I0717 19:36:46.720995  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.721003  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:46.721008  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:46.721059  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:46.755825  459741 cri.go:89] found id: ""
	I0717 19:36:46.755854  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.755866  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:46.755876  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:46.755946  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:46.797091  459741 cri.go:89] found id: ""
	I0717 19:36:46.797130  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.797138  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:46.797145  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:46.797201  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:46.838053  459741 cri.go:89] found id: ""
	I0717 19:36:46.838090  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.838100  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:46.838108  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:46.838176  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:46.881516  459741 cri.go:89] found id: ""
	I0717 19:36:46.881549  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.881558  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:46.881567  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:46.881582  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:46.952407  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:46.952434  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:46.952457  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:47.043739  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:47.043787  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:47.083335  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:47.083367  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:47.138212  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:47.138256  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:49.656394  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:49.670755  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:49.670830  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:49.709177  459741 cri.go:89] found id: ""
	I0717 19:36:49.709208  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.709217  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:49.709222  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:49.709286  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:49.745905  459741 cri.go:89] found id: ""
	I0717 19:36:49.745940  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.745952  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:49.745960  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:49.746038  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:49.779073  459741 cri.go:89] found id: ""
	I0717 19:36:49.779106  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.779117  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:49.779124  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:49.779190  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:49.815459  459741 cri.go:89] found id: ""
	I0717 19:36:49.815504  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.815516  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:49.815525  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:49.815635  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:49.854714  459741 cri.go:89] found id: ""
	I0717 19:36:49.854751  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.854760  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:49.854766  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:49.854821  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:49.897717  459741 cri.go:89] found id: ""
	I0717 19:36:49.897742  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.897752  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:49.897760  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:49.897824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:49.933388  459741 cri.go:89] found id: ""
	I0717 19:36:49.933419  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.933429  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:49.933437  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:49.933527  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:49.971955  459741 cri.go:89] found id: ""
	I0717 19:36:49.971988  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.971999  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:49.972011  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:49.972029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:50.025761  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:50.025801  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:50.039771  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:50.039801  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:50.111349  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:50.111374  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:50.111388  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:50.193972  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:50.194004  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:52.733468  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:52.749052  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:52.749119  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:52.785364  459741 cri.go:89] found id: ""
	I0717 19:36:52.785392  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.785400  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:52.785407  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:52.785462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:52.824177  459741 cri.go:89] found id: ""
	I0717 19:36:52.824211  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.824219  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:52.824225  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:52.824298  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:52.860781  459741 cri.go:89] found id: ""
	I0717 19:36:52.860812  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.860823  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:52.860831  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:52.860904  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:52.903963  459741 cri.go:89] found id: ""
	I0717 19:36:52.903995  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.904006  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:52.904014  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:52.904080  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:52.944920  459741 cri.go:89] found id: ""
	I0717 19:36:52.944950  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.944961  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:52.944968  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:52.945033  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:53.007409  459741 cri.go:89] found id: ""
	I0717 19:36:53.007438  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.007449  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:53.007456  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:53.007526  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:53.048160  459741 cri.go:89] found id: ""
	I0717 19:36:53.048193  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.048205  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:53.048213  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:53.048285  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:53.083493  459741 cri.go:89] found id: ""
	I0717 19:36:53.083522  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.083534  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:53.083546  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:53.083563  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:53.139380  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:53.139425  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:53.154005  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:53.154107  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:53.230123  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:53.230146  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:53.230160  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:53.307183  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:53.307228  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:55.849344  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:55.863554  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:55.863625  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:55.899317  459741 cri.go:89] found id: ""
	I0717 19:36:55.899347  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.899358  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:55.899365  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:55.899433  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:55.934725  459741 cri.go:89] found id: ""
	I0717 19:36:55.934760  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.934771  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:55.934779  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:55.934854  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:55.967721  459741 cri.go:89] found id: ""
	I0717 19:36:55.967751  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.967760  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:55.967768  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:55.967835  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:56.001163  459741 cri.go:89] found id: ""
	I0717 19:36:56.001193  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.001203  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:56.001211  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:56.001309  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:56.040863  459741 cri.go:89] found id: ""
	I0717 19:36:56.040898  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.040910  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:56.040918  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:56.040990  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:56.075045  459741 cri.go:89] found id: ""
	I0717 19:36:56.075075  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.075083  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:56.075090  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:56.075141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:56.115641  459741 cri.go:89] found id: ""
	I0717 19:36:56.115673  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.115683  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:56.115692  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:56.115757  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:56.154952  459741 cri.go:89] found id: ""
	I0717 19:36:56.154989  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.155000  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:56.155012  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:56.155029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:56.168624  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:56.168655  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:56.241129  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:56.241149  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:56.241161  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:56.326577  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:56.326627  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:56.370835  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:56.370896  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:58.923483  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:58.936869  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:58.936971  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:58.970975  459741 cri.go:89] found id: ""
	I0717 19:36:58.971015  459741 logs.go:276] 0 containers: []
	W0717 19:36:58.971026  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:58.971036  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:58.971103  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:59.004902  459741 cri.go:89] found id: ""
	I0717 19:36:59.004936  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.004945  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:59.004953  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:59.005021  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:59.049595  459741 cri.go:89] found id: ""
	I0717 19:36:59.049627  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.049635  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:59.049642  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:59.049694  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:59.084143  459741 cri.go:89] found id: ""
	I0717 19:36:59.084175  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.084185  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:59.084192  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:59.084244  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:59.121362  459741 cri.go:89] found id: ""
	I0717 19:36:59.121397  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.121408  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:59.121416  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:59.121486  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:59.158791  459741 cri.go:89] found id: ""
	I0717 19:36:59.158823  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.158832  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:59.158839  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:59.158907  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:59.196785  459741 cri.go:89] found id: ""
	I0717 19:36:59.196814  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.196825  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:59.196832  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:59.196928  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:59.233526  459741 cri.go:89] found id: ""
	I0717 19:36:59.233585  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.233602  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:59.233615  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:59.233633  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:59.287586  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:59.287629  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:59.303060  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:59.303109  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:59.380105  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:59.380141  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:59.380160  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:59.457673  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:59.457723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:01.999397  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:02.013638  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:02.013769  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:02.053831  459741 cri.go:89] found id: ""
	I0717 19:37:02.053860  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.053869  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:02.053875  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:02.053929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:02.095600  459741 cri.go:89] found id: ""
	I0717 19:37:02.095634  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.095644  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:02.095650  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:02.095703  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:02.134219  459741 cri.go:89] found id: ""
	I0717 19:37:02.134253  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.134267  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:02.134277  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:02.134351  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:02.172985  459741 cri.go:89] found id: ""
	I0717 19:37:02.173017  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.173029  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:02.173037  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:02.173109  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:02.210465  459741 cri.go:89] found id: ""
	I0717 19:37:02.210492  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.210500  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:02.210506  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:02.210562  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:02.246736  459741 cri.go:89] found id: ""
	I0717 19:37:02.246767  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.246775  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:02.246781  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:02.246834  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:02.285131  459741 cri.go:89] found id: ""
	I0717 19:37:02.285166  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.285177  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:02.285185  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:02.285254  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:02.323199  459741 cri.go:89] found id: ""
	I0717 19:37:02.323232  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.323241  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:02.323252  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:02.323266  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:02.337356  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:02.337392  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:02.411669  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:02.411706  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:02.411724  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:02.488543  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:02.488590  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:02.531147  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:02.531189  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:05.085888  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:05.099059  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:05.099134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:05.140745  459741 cri.go:89] found id: ""
	I0717 19:37:05.140771  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.140783  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:05.140791  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:05.140859  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:05.175634  459741 cri.go:89] found id: ""
	I0717 19:37:05.175669  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.175679  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:05.175687  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:05.175761  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:05.213114  459741 cri.go:89] found id: ""
	I0717 19:37:05.213148  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.213157  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:05.213171  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:05.213242  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:05.249756  459741 cri.go:89] found id: ""
	I0717 19:37:05.249791  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.249803  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:05.249811  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:05.249882  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:05.285601  459741 cri.go:89] found id: ""
	I0717 19:37:05.285634  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.285645  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:05.285654  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:05.285729  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:05.325523  459741 cri.go:89] found id: ""
	I0717 19:37:05.325557  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.325566  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:05.325573  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:05.325641  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:05.364250  459741 cri.go:89] found id: ""
	I0717 19:37:05.364284  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.364295  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:05.364303  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:05.364377  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:05.399924  459741 cri.go:89] found id: ""
	I0717 19:37:05.399951  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.399958  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:05.399967  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:05.399979  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:05.456770  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:05.456821  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:05.472041  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:05.472073  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:05.539653  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:05.539685  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:05.539703  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:05.628977  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:05.629023  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:08.181585  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:08.195153  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:08.195225  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:08.234624  459741 cri.go:89] found id: ""
	I0717 19:37:08.234662  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.234674  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:08.234682  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:08.234739  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:08.273034  459741 cri.go:89] found id: ""
	I0717 19:37:08.273069  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.273081  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:08.273089  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:08.273157  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:08.310695  459741 cri.go:89] found id: ""
	I0717 19:37:08.310728  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.310740  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:08.310749  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:08.310815  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:08.346891  459741 cri.go:89] found id: ""
	I0717 19:37:08.346925  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.346936  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:08.346944  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:08.347015  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:08.384830  459741 cri.go:89] found id: ""
	I0717 19:37:08.384863  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.384872  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:08.384878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:08.384948  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:08.423939  459741 cri.go:89] found id: ""
	I0717 19:37:08.423973  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.423983  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:08.423991  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:08.424046  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:08.460822  459741 cri.go:89] found id: ""
	I0717 19:37:08.460854  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.460863  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:08.460874  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:08.460929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:08.497122  459741 cri.go:89] found id: ""
	I0717 19:37:08.497152  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.497164  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:08.497182  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:08.497197  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:08.549130  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:08.549179  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:08.566072  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:08.566109  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:08.637602  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:08.637629  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:08.637647  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:08.729025  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:08.729078  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:11.270696  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:11.285472  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:11.285554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:11.319587  459741 cri.go:89] found id: ""
	I0717 19:37:11.319629  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.319638  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:11.319646  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:11.319712  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:11.353044  459741 cri.go:89] found id: ""
	I0717 19:37:11.353077  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.353087  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:11.353093  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:11.353189  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:11.389515  459741 cri.go:89] found id: ""
	I0717 19:37:11.389545  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.389557  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:11.389565  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:11.389634  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:11.430599  459741 cri.go:89] found id: ""
	I0717 19:37:11.430632  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.430640  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:11.430646  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:11.430714  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:11.472171  459741 cri.go:89] found id: ""
	I0717 19:37:11.472207  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.472217  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:11.472223  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:11.472295  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:11.510599  459741 cri.go:89] found id: ""
	I0717 19:37:11.510672  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.510689  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:11.510706  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:11.510779  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:11.550914  459741 cri.go:89] found id: ""
	I0717 19:37:11.550946  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.550954  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:11.550960  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:11.551017  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:11.591129  459741 cri.go:89] found id: ""
	I0717 19:37:11.591205  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.591219  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:11.591233  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:11.591252  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:11.646229  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:11.646265  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:11.661204  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:11.661243  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:11.742396  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:11.742426  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:11.742442  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:11.824647  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:11.824687  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:14.364360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:14.381022  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:14.381101  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:14.419922  459741 cri.go:89] found id: ""
	I0717 19:37:14.419960  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.419971  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:14.419977  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:14.420032  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:14.459256  459741 cri.go:89] found id: ""
	I0717 19:37:14.459288  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.459296  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:14.459317  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:14.459387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:14.494487  459741 cri.go:89] found id: ""
	I0717 19:37:14.494517  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.494528  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:14.494535  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:14.494609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:14.528878  459741 cri.go:89] found id: ""
	I0717 19:37:14.528919  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.528928  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:14.528934  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:14.528999  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:14.564401  459741 cri.go:89] found id: ""
	I0717 19:37:14.564439  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.564451  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:14.564460  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:14.564548  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:14.604641  459741 cri.go:89] found id: ""
	I0717 19:37:14.604682  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.604694  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:14.604703  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:14.604770  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:14.638128  459741 cri.go:89] found id: ""
	I0717 19:37:14.638159  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.638168  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:14.638175  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:14.638245  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:14.679475  459741 cri.go:89] found id: ""
	I0717 19:37:14.679508  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.679518  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:14.679529  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:14.679545  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:14.733829  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:14.733871  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:14.748878  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:14.748910  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:14.821043  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:14.821073  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:14.821089  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:14.905137  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:14.905178  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:17.445221  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:17.459152  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:17.459221  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:17.498175  459741 cri.go:89] found id: ""
	I0717 19:37:17.498204  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.498216  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:17.498226  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:17.498287  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:17.534460  459741 cri.go:89] found id: ""
	I0717 19:37:17.534498  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.534506  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:17.534512  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:17.534571  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:17.571998  459741 cri.go:89] found id: ""
	I0717 19:37:17.572030  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.572040  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:17.572047  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:17.572110  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:17.611184  459741 cri.go:89] found id: ""
	I0717 19:37:17.611215  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.611224  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:17.611231  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:17.611282  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:17.656227  459741 cri.go:89] found id: ""
	I0717 19:37:17.656275  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.656287  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:17.656295  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:17.656361  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:17.695693  459741 cri.go:89] found id: ""
	I0717 19:37:17.695727  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.695746  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:17.695763  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:17.695835  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:17.734017  459741 cri.go:89] found id: ""
	I0717 19:37:17.734043  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.734052  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:17.734057  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:17.734123  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:17.771539  459741 cri.go:89] found id: ""
	I0717 19:37:17.771575  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.771586  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:17.771597  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:17.771611  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:17.811742  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:17.811783  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:17.861865  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:17.861909  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:17.876221  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:17.876255  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:17.957239  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:17.957262  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:17.957278  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:20.539123  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:20.554464  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:20.554546  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:20.591656  459741 cri.go:89] found id: ""
	I0717 19:37:20.591697  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.591706  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:20.591716  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:20.591775  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:20.629470  459741 cri.go:89] found id: ""
	I0717 19:37:20.629504  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.629513  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:20.629519  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:20.629587  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:20.670022  459741 cri.go:89] found id: ""
	I0717 19:37:20.670090  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.670108  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:20.670120  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:20.670199  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:20.711820  459741 cri.go:89] found id: ""
	I0717 19:37:20.711858  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.711869  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:20.711878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:20.711952  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:20.746305  459741 cri.go:89] found id: ""
	I0717 19:37:20.746339  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.746349  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:20.746356  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:20.746423  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:20.782218  459741 cri.go:89] found id: ""
	I0717 19:37:20.782255  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.782266  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:20.782275  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:20.782351  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:20.818704  459741 cri.go:89] found id: ""
	I0717 19:37:20.818740  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.818749  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:20.818757  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:20.818820  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:20.853662  459741 cri.go:89] found id: ""
	I0717 19:37:20.853693  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.853701  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:20.853710  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:20.853723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:20.896351  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:20.896377  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:20.948402  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:20.948450  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:20.962807  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:20.962840  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:21.057005  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:21.057036  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:21.057055  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:23.634596  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:23.648460  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:23.648555  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:23.687289  459741 cri.go:89] found id: ""
	I0717 19:37:23.687320  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.687331  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:23.687341  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:23.687407  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:23.725794  459741 cri.go:89] found id: ""
	I0717 19:37:23.725826  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.725847  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:23.725855  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:23.725916  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:23.761575  459741 cri.go:89] found id: ""
	I0717 19:37:23.761624  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.761635  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:23.761643  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:23.761709  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:23.800061  459741 cri.go:89] found id: ""
	I0717 19:37:23.800098  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.800111  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:23.800120  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:23.800190  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:23.836067  459741 cri.go:89] found id: ""
	I0717 19:37:23.836098  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.836107  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:23.836113  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:23.836170  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:23.875151  459741 cri.go:89] found id: ""
	I0717 19:37:23.875179  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.875192  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:23.875200  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:23.875268  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:23.913641  459741 cri.go:89] found id: ""
	I0717 19:37:23.913675  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.913685  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:23.913693  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:23.913759  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:23.950362  459741 cri.go:89] found id: ""
	I0717 19:37:23.950391  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.950400  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:23.950410  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:23.950426  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:24.000879  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:24.000924  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:24.014874  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:24.014912  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:24.086589  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:24.086624  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:24.086639  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:24.163160  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:24.163208  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:26.705781  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:26.720471  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:26.720562  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:26.776895  459741 cri.go:89] found id: ""
	I0717 19:37:26.776927  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.776936  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:26.776945  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:26.777038  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:26.812191  459741 cri.go:89] found id: ""
	I0717 19:37:26.812219  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.812228  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:26.812234  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:26.812288  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:26.851142  459741 cri.go:89] found id: ""
	I0717 19:37:26.851174  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.851183  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:26.851189  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:26.851243  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:26.887218  459741 cri.go:89] found id: ""
	I0717 19:37:26.887254  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.887266  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:26.887274  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:26.887364  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:26.924197  459741 cri.go:89] found id: ""
	I0717 19:37:26.924226  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.924234  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:26.924240  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:26.924293  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:26.964475  459741 cri.go:89] found id: ""
	I0717 19:37:26.964528  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.964538  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:26.964545  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:26.964618  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:27.001951  459741 cri.go:89] found id: ""
	I0717 19:37:27.002001  459741 logs.go:276] 0 containers: []
	W0717 19:37:27.002010  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:27.002017  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:27.002068  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:27.037062  459741 cri.go:89] found id: ""
	I0717 19:37:27.037094  459741 logs.go:276] 0 containers: []
	W0717 19:37:27.037108  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:27.037122  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:27.037140  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:27.090343  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:27.090389  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:27.104534  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:27.104579  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:27.179957  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:27.179982  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:27.179995  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:27.260358  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:27.260399  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:29.806487  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:29.821519  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:29.821584  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:29.856293  459741 cri.go:89] found id: ""
	I0717 19:37:29.856328  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.856338  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:29.856347  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:29.856413  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:29.893174  459741 cri.go:89] found id: ""
	I0717 19:37:29.893210  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.893220  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:29.893229  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:29.893294  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:29.928264  459741 cri.go:89] found id: ""
	I0717 19:37:29.928298  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.928309  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:29.928316  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:29.928386  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:29.963399  459741 cri.go:89] found id: ""
	I0717 19:37:29.963441  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.963453  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:29.963461  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:29.963532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:30.001835  459741 cri.go:89] found id: ""
	I0717 19:37:30.001868  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.001878  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:30.001886  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:30.001953  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:30.039476  459741 cri.go:89] found id: ""
	I0717 19:37:30.039507  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.039516  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:30.039526  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:30.039601  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:30.076051  459741 cri.go:89] found id: ""
	I0717 19:37:30.076089  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.076101  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:30.076121  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:30.076198  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:30.110959  459741 cri.go:89] found id: ""
	I0717 19:37:30.110988  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.111000  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:30.111013  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:30.111029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:30.195062  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:30.195101  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:30.235830  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:30.235872  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:30.291057  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:30.291098  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:30.306510  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:30.306543  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:30.382689  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:32.883437  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:32.898085  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:32.898159  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:32.933782  459741 cri.go:89] found id: ""
	I0717 19:37:32.933813  459741 logs.go:276] 0 containers: []
	W0717 19:37:32.933823  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:32.933842  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:32.933909  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:32.973843  459741 cri.go:89] found id: ""
	I0717 19:37:32.973871  459741 logs.go:276] 0 containers: []
	W0717 19:37:32.973879  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:32.973885  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:32.973936  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:33.010691  459741 cri.go:89] found id: ""
	I0717 19:37:33.010718  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.010727  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:33.010732  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:33.010791  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:33.051223  459741 cri.go:89] found id: ""
	I0717 19:37:33.051258  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.051269  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:33.051276  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:33.051345  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:33.091182  459741 cri.go:89] found id: ""
	I0717 19:37:33.091212  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.091220  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:33.091225  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:33.091279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:33.128755  459741 cri.go:89] found id: ""
	I0717 19:37:33.128791  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.128804  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:33.128820  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:33.128887  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:33.171834  459741 cri.go:89] found id: ""
	I0717 19:37:33.171871  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.171883  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:33.171890  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:33.171956  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:33.230954  459741 cri.go:89] found id: ""
	I0717 19:37:33.230982  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.230990  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:33.231001  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:33.231013  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:33.325437  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:33.325483  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:33.325500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:33.418548  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:33.418590  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:33.467574  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:33.467614  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:33.521312  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:33.521346  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.037360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:36.051209  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:36.051279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:36.088849  459741 cri.go:89] found id: ""
	I0717 19:37:36.088897  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.088909  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:36.088916  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:36.088973  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:36.124070  459741 cri.go:89] found id: ""
	I0717 19:37:36.124106  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.124118  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:36.124125  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:36.124199  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:36.159373  459741 cri.go:89] found id: ""
	I0717 19:37:36.159402  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.159410  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:36.159415  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:36.159467  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:36.197269  459741 cri.go:89] found id: ""
	I0717 19:37:36.197294  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.197302  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:36.197337  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:36.197389  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:36.231024  459741 cri.go:89] found id: ""
	I0717 19:37:36.231060  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.231072  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:36.231080  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:36.231152  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:36.265388  459741 cri.go:89] found id: ""
	I0717 19:37:36.265414  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.265422  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:36.265429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:36.265477  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:36.301738  459741 cri.go:89] found id: ""
	I0717 19:37:36.301774  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.301786  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:36.301794  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:36.301892  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:36.340042  459741 cri.go:89] found id: ""
	I0717 19:37:36.340072  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.340080  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:36.340091  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:36.340113  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:36.389928  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:36.389962  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:36.442668  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:36.442698  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.458862  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:36.458908  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:36.537169  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:36.537199  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:36.537216  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:39.120374  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:39.138989  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:39.139065  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:39.198086  459741 cri.go:89] found id: ""
	I0717 19:37:39.198113  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.198121  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:39.198128  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:39.198192  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:39.249660  459741 cri.go:89] found id: ""
	I0717 19:37:39.249707  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.249718  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:39.249725  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:39.249802  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:39.296042  459741 cri.go:89] found id: ""
	I0717 19:37:39.296079  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.296105  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:39.296115  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:39.296198  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:39.335401  459741 cri.go:89] found id: ""
	I0717 19:37:39.335441  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.335453  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:39.335461  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:39.335532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:39.379343  459741 cri.go:89] found id: ""
	I0717 19:37:39.379389  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.379401  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:39.379409  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:39.379478  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:39.417450  459741 cri.go:89] found id: ""
	I0717 19:37:39.417478  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.417486  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:39.417493  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:39.417556  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:39.453778  459741 cri.go:89] found id: ""
	I0717 19:37:39.453821  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.453835  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:39.453843  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:39.453937  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:39.490619  459741 cri.go:89] found id: ""
	I0717 19:37:39.490654  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.490666  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:39.490678  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:39.490695  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:39.552266  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:39.552304  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:39.567973  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:39.568018  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:39.659709  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:39.659740  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:39.659757  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:39.752017  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:39.752064  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:42.298864  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:42.312076  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:42.312160  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:42.346742  459741 cri.go:89] found id: ""
	I0717 19:37:42.346767  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.346782  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:42.346787  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:42.346839  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:42.386100  459741 cri.go:89] found id: ""
	I0717 19:37:42.386131  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.386139  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:42.386145  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:42.386196  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:42.420604  459741 cri.go:89] found id: ""
	I0717 19:37:42.420634  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.420646  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:42.420656  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:42.420725  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:42.457305  459741 cri.go:89] found id: ""
	I0717 19:37:42.457338  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.457349  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:42.457357  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:42.457422  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:42.491383  459741 cri.go:89] found id: ""
	I0717 19:37:42.491418  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.491427  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:42.491434  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:42.491489  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:42.527500  459741 cri.go:89] found id: ""
	I0717 19:37:42.527533  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.527547  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:42.527557  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:42.527642  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:42.560724  459741 cri.go:89] found id: ""
	I0717 19:37:42.560759  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.560769  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:42.560778  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:42.560854  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:42.595812  459741 cri.go:89] found id: ""
	I0717 19:37:42.595846  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.595858  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:42.595870  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:42.595886  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:42.610094  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:42.610129  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:42.683744  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:42.683763  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:42.683776  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:42.767187  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:42.767237  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:42.810319  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:42.810350  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:45.363245  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:45.378562  459741 kubeadm.go:597] duration metric: took 4m4.629259775s to restartPrimaryControlPlane
	W0717 19:37:45.378681  459741 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:37:45.378723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:37:50.298107  459741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.919332692s)
	I0717 19:37:50.298189  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:37:50.314299  459741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:37:50.325112  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:37:50.335943  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:37:50.335970  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:37:50.336018  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:37:50.345604  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:37:50.345669  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:37:50.355339  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:37:50.365401  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:37:50.365468  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:37:50.378870  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:37:50.388710  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:37:50.388779  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:37:50.398847  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:37:50.408579  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:37:50.408648  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:37:50.419223  459741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:37:50.655878  459741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:39:46.819105  459741 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:39:46.819209  459741 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 19:39:46.820837  459741 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:39:46.820940  459741 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:39:46.821010  459741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:39:46.821148  459741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:39:46.821282  459741 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:39:46.821377  459741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:39:46.823092  459741 out.go:204]   - Generating certificates and keys ...
	I0717 19:39:46.823190  459741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:39:46.823280  459741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:39:46.823409  459741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:39:46.823509  459741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:39:46.823629  459741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:39:46.823715  459741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:39:46.823802  459741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:39:46.823885  459741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:39:46.823975  459741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:39:46.824067  459741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:39:46.824109  459741 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:39:46.824183  459741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:39:46.824248  459741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:39:46.824309  459741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:39:46.824409  459741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:39:46.824506  459741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:39:46.824642  459741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:39:46.824729  459741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:39:46.824775  459741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:39:46.824869  459741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:39:46.826222  459741 out.go:204]   - Booting up control plane ...
	I0717 19:39:46.826334  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:39:46.826483  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:39:46.826566  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:39:46.826677  459741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:39:46.826855  459741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:39:46.826954  459741 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:39:46.827061  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827286  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827365  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827537  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827618  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827814  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827916  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.828105  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.828210  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.828440  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.828449  459741 kubeadm.go:310] 
	I0717 19:39:46.828482  459741 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:39:46.828544  459741 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:39:46.828555  459741 kubeadm.go:310] 
	I0717 19:39:46.828601  459741 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:39:46.828648  459741 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:39:46.828787  459741 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:39:46.828795  459741 kubeadm.go:310] 
	I0717 19:39:46.828928  459741 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:39:46.828975  459741 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:39:46.829023  459741 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:39:46.829033  459741 kubeadm.go:310] 
	I0717 19:39:46.829156  459741 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:39:46.829280  459741 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:39:46.829288  459741 kubeadm.go:310] 
	I0717 19:39:46.829430  459741 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:39:46.829538  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:39:46.829640  459741 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:39:46.829753  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:39:46.829812  459741 kubeadm.go:310] 
	W0717 19:39:46.829883  459741 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 19:39:46.829939  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:39:47.290949  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:39:47.307166  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:39:47.318260  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:39:47.318283  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:39:47.318336  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:39:47.328087  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:39:47.328150  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:39:47.339029  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:39:47.348854  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:39:47.348913  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:39:47.358498  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:39:47.368592  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:39:47.368651  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:39:47.379802  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:39:47.391069  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:39:47.391139  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:39:47.402410  459741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:39:47.620822  459741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:41:43.630999  459741 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:41:43.631161  459741 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 19:41:43.631238  459741 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:41:43.631322  459741 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:41:43.631452  459741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:41:43.631595  459741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:41:43.631767  459741 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:41:43.631852  459741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:41:43.633956  459741 out.go:204]   - Generating certificates and keys ...
	I0717 19:41:43.634058  459741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:41:43.634160  459741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:41:43.634292  459741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:41:43.634382  459741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:41:43.634457  459741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:41:43.634560  459741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:41:43.634646  459741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:41:43.634743  459741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:41:43.634848  459741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:41:43.634977  459741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:41:43.635038  459741 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:41:43.635088  459741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:41:43.635129  459741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:41:43.635173  459741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:41:43.635240  459741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:41:43.635326  459741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:41:43.635477  459741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:41:43.635594  459741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:41:43.635675  459741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:41:43.635758  459741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:41:43.637529  459741 out.go:204]   - Booting up control plane ...
	I0717 19:41:43.637719  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:41:43.637857  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:41:43.637948  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:41:43.638086  459741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:41:43.638278  459741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:41:43.638336  459741 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:41:43.638427  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.638656  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.638732  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.638966  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639046  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639310  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639407  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639665  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639769  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639950  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639969  459741 kubeadm.go:310] 
	I0717 19:41:43.640006  459741 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:41:43.640047  459741 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:41:43.640056  459741 kubeadm.go:310] 
	I0717 19:41:43.640101  459741 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:41:43.640148  459741 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:41:43.640247  459741 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:41:43.640255  459741 kubeadm.go:310] 
	I0717 19:41:43.640365  459741 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:41:43.640398  459741 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:41:43.640426  459741 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:41:43.640434  459741 kubeadm.go:310] 
	I0717 19:41:43.640580  459741 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:41:43.640664  459741 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:41:43.640676  459741 kubeadm.go:310] 
	I0717 19:41:43.640772  459741 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:41:43.640849  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:41:43.640912  459741 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:41:43.640975  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:41:43.640997  459741 kubeadm.go:310] 
	I0717 19:41:43.641050  459741 kubeadm.go:394] duration metric: took 8m2.947491611s to StartCluster
	I0717 19:41:43.641102  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:41:43.641159  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:41:43.691693  459741 cri.go:89] found id: ""
	I0717 19:41:43.691734  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.691746  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:41:43.691755  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:41:43.691822  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:41:43.730266  459741 cri.go:89] found id: ""
	I0717 19:41:43.730301  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.730311  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:41:43.730319  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:41:43.730401  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:41:43.766878  459741 cri.go:89] found id: ""
	I0717 19:41:43.766907  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.766916  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:41:43.766922  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:41:43.767012  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:41:43.810002  459741 cri.go:89] found id: ""
	I0717 19:41:43.810040  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.810051  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:41:43.810059  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:41:43.810133  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:41:43.846561  459741 cri.go:89] found id: ""
	I0717 19:41:43.846621  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.846637  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:41:43.846645  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:41:43.846715  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:41:43.884047  459741 cri.go:89] found id: ""
	I0717 19:41:43.884080  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.884091  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:41:43.884099  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:41:43.884224  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:41:43.931636  459741 cri.go:89] found id: ""
	I0717 19:41:43.931677  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.931691  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:41:43.931699  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:41:43.931768  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:41:43.969202  459741 cri.go:89] found id: ""
	I0717 19:41:43.969240  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.969260  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:41:43.969275  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:41:43.969296  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:41:44.026443  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:41:44.026500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:41:44.042750  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:41:44.042788  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:41:44.140053  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:41:44.140079  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:41:44.140093  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:41:44.263660  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:41:44.263704  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 19:41:44.311783  459741 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 19:41:44.311838  459741 out.go:239] * 
	* 
	W0717 19:41:44.311948  459741 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:41:44.311982  459741 out.go:239] * 
	W0717 19:41:44.313153  459741 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:41:44.316845  459741 out.go:177] 
	W0717 19:41:44.318001  459741 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:41:44.318059  459741 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 19:41:44.318087  459741 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 19:41:44.319471  459741 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-998147 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
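The kubeadm output above already names the checks to run when the kubelet never answers on 127.0.0.1:10248, and the closing suggestion points at a kubelet/CRI-O cgroup-driver mismatch. A minimal sketch consolidating those checks (these commands were not captured in this run; the profile name is taken from the failing start above, and the kubelet/CRI-O config paths are the conventional ones, assumed rather than verified here):

    PROFILE=old-k8s-version-998147
    # kubelet health and recent logs, as suggested by the kubeadm output above
    out/minikube-linux-amd64 -p "$PROFILE" ssh "sudo systemctl status kubelet --no-pager"
    out/minikube-linux-amd64 -p "$PROFILE" ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
    # control-plane containers under CRI-O, as suggested by the kubeadm output above
    out/minikube-linux-amd64 -p "$PROFILE" ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
    # compare cgroup drivers; a mismatch is what the kubelet.cgroup-driver=systemd suggestion targets
    out/minikube-linux-amd64 -p "$PROFILE" ssh "sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml"
    out/minikube-linux-amd64 -p "$PROFILE" ssh "sudo grep -ri cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d 2>/dev/null"
    # retry with the flag suggested in the log, keeping the original start arguments
    out/minikube-linux-amd64 start -p "$PROFILE" --extra-config=kubelet.cgroup-driver=systemd   # plus the original flags shown above

If the kubelet is crash-looping before the cgroup comparison even matters, the journalctl output from the second command is usually enough to tell.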
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-998147 -n old-k8s-version-998147
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-998147 -n old-k8s-version-998147: exit status 2 (239.504156ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
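The harness queries only the Host field, which still reports Running because the VM is up even though kubeadm never brought the control plane online; that is why exit status 2 is flagged as possibly ok. A fuller status query makes the gap visible (not captured in this run; the extra template fields are minikube's standard status fields, assumed here):

    out/minikube-linux-amd64 status -p old-k8s-version-998147 \
      --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'

On a run like this one would expect Host to show Running while the kubelet and apiserver fields do not.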
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-998147 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-998147 logs -n 25: (1.794006365s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-369638 sudo cat                              | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo find                             | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo crio                             | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-369638                                       | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	| delete  | -p                                                     | disable-driver-mounts-728347 | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | disable-driver-mounts-728347                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:25 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-637675            | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC | 17 Jul 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-713715             | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC | 17 Jul 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-713715                                   | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-378944  | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC | 17 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC |                     |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-998147        | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-637675                 | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-713715                  | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC | 17 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-713715 --memory=2200                     | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:37 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-378944       | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:38 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-998147             | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:29:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:29:11.500453  459741 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:29:11.500622  459741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:29:11.500633  459741 out.go:304] Setting ErrFile to fd 2...
	I0717 19:29:11.500639  459741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:29:11.500842  459741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:29:11.501399  459741 out.go:298] Setting JSON to false
	I0717 19:29:11.502411  459741 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11494,"bootTime":1721233057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:29:11.502474  459741 start.go:139] virtualization: kvm guest
	I0717 19:29:11.504961  459741 out.go:177] * [old-k8s-version-998147] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:29:11.506551  459741 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 19:29:11.506614  459741 notify.go:220] Checking for updates...
	I0717 19:29:11.509388  459741 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:29:11.511209  459741 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:29:11.512669  459741 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:29:11.514164  459741 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:29:11.515499  459741 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:29:11.517240  459741 config.go:182] Loaded profile config "old-k8s-version-998147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:29:11.517702  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:29:11.517772  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:29:11.533954  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42501
	I0717 19:29:11.534390  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:29:11.534975  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:29:11.535003  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:29:11.535362  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:29:11.535550  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:29:11.537723  459741 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 19:29:11.539119  459741 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:29:11.539416  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:29:11.539452  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:29:11.554412  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32849
	I0717 19:29:11.554815  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:29:11.555296  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:29:11.555317  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:29:11.555633  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:29:11.555830  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:29:11.590907  459741 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:29:11.592089  459741 start.go:297] selected driver: kvm2
	I0717 19:29:11.592110  459741 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:29:11.592224  459741 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:29:11.592942  459741 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:29:11.593047  459741 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:29:11.607578  459741 install.go:137] /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 19:29:11.607960  459741 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:29:11.608027  459741 cni.go:84] Creating CNI manager for ""
	I0717 19:29:11.608045  459741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:29:11.608102  459741 start.go:340] cluster config:
	{Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:29:11.608223  459741 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:29:11.609956  459741 out.go:177] * Starting "old-k8s-version-998147" primary control-plane node in "old-k8s-version-998147" cluster
	I0717 19:29:15.576809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:11.611130  459741 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:29:11.611167  459741 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 19:29:11.611178  459741 cache.go:56] Caching tarball of preloaded images
	I0717 19:29:11.611285  459741 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:29:11.611302  459741 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 19:29:11.611414  459741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json ...
	I0717 19:29:11.611598  459741 start.go:360] acquireMachinesLock for old-k8s-version-998147: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:29:18.648779  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:24.728819  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:27.800821  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:33.880750  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:36.952809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:43.032777  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:46.104785  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:52.184787  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:55.260741  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:01.336761  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:04.408863  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:10.488814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:13.560771  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:19.640809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:22.712791  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:28.792742  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:31.864819  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:37.944814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:41.016844  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:47.096765  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:50.168766  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:56.248814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:59.320805  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:05.400752  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:08.472800  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:14.552805  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:17.624781  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:23.704775  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:26.776769  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:32.856798  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:35.928859  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:42.008795  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:45.080741  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:51.160806  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:54.232765  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:00.312835  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:03.384814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:09.464779  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:12.536704  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:18.616758  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:21.688749  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:27.768726  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:30.840760  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:33.845161  459147 start.go:364] duration metric: took 4m31.30170624s to acquireMachinesLock for "no-preload-713715"
	I0717 19:32:33.845231  459147 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:32:33.845239  459147 fix.go:54] fixHost starting: 
	I0717 19:32:33.845641  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:32:33.845672  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:32:33.861218  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I0717 19:32:33.861739  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:32:33.862269  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:32:33.862294  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:32:33.862688  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:32:33.862906  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:33.863078  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:32:33.864713  459147 fix.go:112] recreateIfNeeded on no-preload-713715: state=Stopped err=<nil>
	I0717 19:32:33.864747  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	W0717 19:32:33.864918  459147 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:32:33.866791  459147 out.go:177] * Restarting existing kvm2 VM for "no-preload-713715" ...
	I0717 19:32:33.842533  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:32:33.842571  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:32:33.842991  459061 buildroot.go:166] provisioning hostname "embed-certs-637675"
	I0717 19:32:33.843030  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:32:33.843258  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:32:33.844991  459061 machine.go:97] duration metric: took 4m37.424855793s to provisionDockerMachine
	I0717 19:32:33.845049  459061 fix.go:56] duration metric: took 4m37.444711115s for fixHost
	I0717 19:32:33.845058  459061 start.go:83] releasing machines lock for "embed-certs-637675", held for 4m37.444736968s
	W0717 19:32:33.845085  459061 start.go:714] error starting host: provision: host is not running
	W0717 19:32:33.845226  459061 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 19:32:33.845240  459061 start.go:729] Will try again in 5 seconds ...
	I0717 19:32:33.868034  459147 main.go:141] libmachine: (no-preload-713715) Calling .Start
	I0717 19:32:33.868203  459147 main.go:141] libmachine: (no-preload-713715) Ensuring networks are active...
	I0717 19:32:33.868998  459147 main.go:141] libmachine: (no-preload-713715) Ensuring network default is active
	I0717 19:32:33.869310  459147 main.go:141] libmachine: (no-preload-713715) Ensuring network mk-no-preload-713715 is active
	I0717 19:32:33.869667  459147 main.go:141] libmachine: (no-preload-713715) Getting domain xml...
	I0717 19:32:33.870300  459147 main.go:141] libmachine: (no-preload-713715) Creating domain...
	I0717 19:32:35.077699  459147 main.go:141] libmachine: (no-preload-713715) Waiting to get IP...
	I0717 19:32:35.078453  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.078991  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.079061  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.078942  460425 retry.go:31] will retry after 213.705648ms: waiting for machine to come up
	I0717 19:32:35.294580  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.294987  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.295015  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.294949  460425 retry.go:31] will retry after 341.137055ms: waiting for machine to come up
	I0717 19:32:35.637531  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.637894  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.637922  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.637842  460425 retry.go:31] will retry after 479.10915ms: waiting for machine to come up
	I0717 19:32:36.118434  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:36.118887  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:36.118918  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:36.118837  460425 retry.go:31] will retry after 404.249247ms: waiting for machine to come up
	I0717 19:32:36.524442  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:36.524847  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:36.524880  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:36.524812  460425 retry.go:31] will retry after 737.708741ms: waiting for machine to come up
	I0717 19:32:37.263864  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:37.264365  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:37.264393  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:37.264241  460425 retry.go:31] will retry after 793.874529ms: waiting for machine to come up
	I0717 19:32:38.846990  459061 start.go:360] acquireMachinesLock for embed-certs-637675: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:32:38.059206  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:38.059645  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:38.059671  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:38.059592  460425 retry.go:31] will retry after 831.952935ms: waiting for machine to come up
	I0717 19:32:38.893113  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:38.893595  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:38.893623  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:38.893496  460425 retry.go:31] will retry after 955.463175ms: waiting for machine to come up
	I0717 19:32:39.850681  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:39.851111  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:39.851146  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:39.851045  460425 retry.go:31] will retry after 1.513026699s: waiting for machine to come up
	I0717 19:32:41.365899  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:41.366497  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:41.366528  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:41.366435  460425 retry.go:31] will retry after 1.503398124s: waiting for machine to come up
	I0717 19:32:42.872396  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:42.872932  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:42.872961  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:42.872904  460425 retry.go:31] will retry after 2.818722445s: waiting for machine to come up
	I0717 19:32:45.692847  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:45.693240  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:45.693270  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:45.693168  460425 retry.go:31] will retry after 2.647833654s: waiting for machine to come up
	I0717 19:32:48.344167  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:48.344671  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:48.344711  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:48.344593  460425 retry.go:31] will retry after 3.625317785s: waiting for machine to come up
	I0717 19:32:51.973297  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.973853  459147 main.go:141] libmachine: (no-preload-713715) Found IP for machine: 192.168.61.66
	I0717 19:32:51.973882  459147 main.go:141] libmachine: (no-preload-713715) Reserving static IP address...
	I0717 19:32:51.973897  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has current primary IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.974288  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "no-preload-713715", mac: "52:54:00:9e:fc:38", ip: "192.168.61.66"} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:51.974314  459147 main.go:141] libmachine: (no-preload-713715) DBG | skip adding static IP to network mk-no-preload-713715 - found existing host DHCP lease matching {name: "no-preload-713715", mac: "52:54:00:9e:fc:38", ip: "192.168.61.66"}
	I0717 19:32:51.974324  459147 main.go:141] libmachine: (no-preload-713715) Reserved static IP address: 192.168.61.66
	I0717 19:32:51.974334  459147 main.go:141] libmachine: (no-preload-713715) Waiting for SSH to be available...
	I0717 19:32:51.974342  459147 main.go:141] libmachine: (no-preload-713715) DBG | Getting to WaitForSSH function...
	I0717 19:32:51.976322  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.976760  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:51.976804  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.976918  459147 main.go:141] libmachine: (no-preload-713715) DBG | Using SSH client type: external
	I0717 19:32:51.976956  459147 main.go:141] libmachine: (no-preload-713715) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa (-rw-------)
	I0717 19:32:51.976993  459147 main.go:141] libmachine: (no-preload-713715) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:32:51.977004  459147 main.go:141] libmachine: (no-preload-713715) DBG | About to run SSH command:
	I0717 19:32:51.977013  459147 main.go:141] libmachine: (no-preload-713715) DBG | exit 0
	I0717 19:32:52.100405  459147 main.go:141] libmachine: (no-preload-713715) DBG | SSH cmd err, output: <nil>: 
	I0717 19:32:52.100914  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetConfigRaw
	I0717 19:32:52.101578  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:52.103993  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.104431  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.104461  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.104779  459147 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/config.json ...
	I0717 19:32:52.104987  459147 machine.go:94] provisionDockerMachine start ...
	I0717 19:32:52.105006  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:52.105234  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.107642  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.108002  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.108027  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.108132  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.108311  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.108472  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.108628  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.108804  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.109027  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.109037  459147 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:32:52.216916  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:32:52.216949  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.217209  459147 buildroot.go:166] provisioning hostname "no-preload-713715"
	I0717 19:32:52.217238  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.217427  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.220152  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.220434  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.220472  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.220716  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.220923  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.221117  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.221230  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.221386  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.221575  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.221592  459147 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-713715 && echo "no-preload-713715" | sudo tee /etc/hostname
	I0717 19:32:52.343761  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-713715
	
	I0717 19:32:52.343802  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.347059  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.347370  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.347400  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.347652  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.347883  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.348182  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.348374  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.348625  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.348820  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.348836  459147 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-713715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-713715/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-713715' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:32:53.313707  459447 start.go:364] duration metric: took 4m16.715852426s to acquireMachinesLock for "default-k8s-diff-port-378944"
	I0717 19:32:53.313783  459447 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:32:53.313790  459447 fix.go:54] fixHost starting: 
	I0717 19:32:53.314243  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:32:53.314285  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:32:53.330763  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0717 19:32:53.331159  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:32:53.331660  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:32:53.331686  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:32:53.332089  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:32:53.332319  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:32:53.332479  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:32:53.334126  459447 fix.go:112] recreateIfNeeded on default-k8s-diff-port-378944: state=Stopped err=<nil>
	I0717 19:32:53.334172  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	W0717 19:32:53.334327  459447 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:32:53.336801  459447 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-378944" ...
	I0717 19:32:52.462144  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:32:52.462179  459147 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:32:52.462197  459147 buildroot.go:174] setting up certificates
	I0717 19:32:52.462210  459147 provision.go:84] configureAuth start
	I0717 19:32:52.462224  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.462579  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:52.465348  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.465889  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.465919  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.466069  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.468522  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.468914  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.468950  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.469041  459147 provision.go:143] copyHostCerts
	I0717 19:32:52.469126  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:32:52.469146  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:32:52.469234  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:32:52.469357  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:32:52.469367  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:32:52.469408  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:32:52.469492  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:32:52.469501  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:32:52.469535  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:32:52.469621  459147 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.no-preload-713715 san=[127.0.0.1 192.168.61.66 localhost minikube no-preload-713715]
	I0717 19:32:52.650963  459147 provision.go:177] copyRemoteCerts
	I0717 19:32:52.651037  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:32:52.651075  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.654245  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.654597  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.654616  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.654825  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.655055  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.655215  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.655411  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:52.739048  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:32:52.762566  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 19:32:52.785616  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:32:52.808881  459147 provision.go:87] duration metric: took 346.648771ms to configureAuth
	I0717 19:32:52.808922  459147 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:32:52.809145  459147 config.go:182] Loaded profile config "no-preload-713715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:32:52.809246  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.812111  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.812423  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.812457  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.812686  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.812885  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.813186  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.813346  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.813542  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.813778  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.813800  459147 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:32:53.076607  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:32:53.076638  459147 machine.go:97] duration metric: took 971.636298ms to provisionDockerMachine
	I0717 19:32:53.076652  459147 start.go:293] postStartSetup for "no-preload-713715" (driver="kvm2")
	I0717 19:32:53.076685  459147 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:32:53.076714  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.077033  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:32:53.077068  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.079605  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.079887  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.079911  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.080028  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.080217  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.080401  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.080593  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.163562  459147 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:32:53.167996  459147 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:32:53.168026  459147 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:32:53.168111  459147 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:32:53.168194  459147 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:32:53.168304  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:32:53.178039  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:32:53.201841  459147 start.go:296] duration metric: took 125.171457ms for postStartSetup
	I0717 19:32:53.201908  459147 fix.go:56] duration metric: took 19.356669392s for fixHost
	I0717 19:32:53.201944  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.204438  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.204823  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.204847  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.205012  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.205195  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.205352  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.205501  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.205632  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:53.205807  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:53.205818  459147 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:32:53.313516  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244773.289121394
	
	I0717 19:32:53.313540  459147 fix.go:216] guest clock: 1721244773.289121394
	I0717 19:32:53.313547  459147 fix.go:229] Guest: 2024-07-17 19:32:53.289121394 +0000 UTC Remote: 2024-07-17 19:32:53.201923093 +0000 UTC m=+290.801143172 (delta=87.198301ms)
	I0717 19:32:53.313569  459147 fix.go:200] guest clock delta is within tolerance: 87.198301ms
	I0717 19:32:53.313595  459147 start.go:83] releasing machines lock for "no-preload-713715", held for 19.468370802s
	I0717 19:32:53.313630  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.313917  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:53.316881  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.317256  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.317287  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.317443  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.317922  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.318107  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.318182  459147 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:32:53.318238  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.318358  459147 ssh_runner.go:195] Run: cat /version.json
	I0717 19:32:53.318384  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.321257  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321424  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321620  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.321641  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321748  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.321772  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321815  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.322061  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.322079  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.322282  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.322280  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.322459  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.322464  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.322592  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.401861  459147 ssh_runner.go:195] Run: systemctl --version
	I0717 19:32:53.425378  459147 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:32:53.567192  459147 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:32:53.575354  459147 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:32:53.575425  459147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:32:53.595781  459147 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:32:53.595818  459147 start.go:495] detecting cgroup driver to use...
	I0717 19:32:53.595955  459147 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:32:53.611488  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:32:53.625548  459147 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:32:53.625612  459147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:32:53.639207  459147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:32:53.652721  459147 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:32:53.772322  459147 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:32:53.942009  459147 docker.go:233] disabling docker service ...
	I0717 19:32:53.942092  459147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:32:53.961729  459147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:32:53.974585  459147 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:32:54.112406  459147 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:32:54.245426  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:32:54.259855  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:32:54.278930  459147 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 19:32:54.279008  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.289913  459147 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:32:54.289992  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.300687  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.312480  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.324895  459147 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:32:54.335879  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.347434  459147 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.367882  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.379415  459147 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:32:54.390488  459147 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:32:54.390554  459147 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:32:54.411855  459147 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:32:54.423747  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:32:54.562086  459147 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:32:54.707957  459147 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:32:54.708052  459147 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:32:54.712631  459147 start.go:563] Will wait 60s for crictl version
	I0717 19:32:54.712693  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:54.716329  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:32:54.753525  459147 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:32:54.753634  459147 ssh_runner.go:195] Run: crio --version
	I0717 19:32:54.782659  459147 ssh_runner.go:195] Run: crio --version
	I0717 19:32:54.813996  459147 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 19:32:53.338154  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Start
	I0717 19:32:53.338327  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring networks are active...
	I0717 19:32:53.338965  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring network default is active
	I0717 19:32:53.339348  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring network mk-default-k8s-diff-port-378944 is active
	I0717 19:32:53.339780  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Getting domain xml...
	I0717 19:32:53.340436  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Creating domain...
	I0717 19:32:54.632016  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting to get IP...
	I0717 19:32:54.632953  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.633425  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.633541  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:54.633409  460568 retry.go:31] will retry after 191.141019ms: waiting for machine to come up
	I0717 19:32:54.825767  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.826279  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.826311  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:54.826243  460568 retry.go:31] will retry after 334.738903ms: waiting for machine to come up
	I0717 19:32:55.162861  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.163361  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.163394  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:55.163319  460568 retry.go:31] will retry after 446.719082ms: waiting for machine to come up
	I0717 19:32:55.611971  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.612359  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.612388  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:55.612297  460568 retry.go:31] will retry after 387.196239ms: waiting for machine to come up
	I0717 19:32:56.000969  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.001385  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.001421  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:56.001323  460568 retry.go:31] will retry after 618.776991ms: waiting for machine to come up
	I0717 19:32:54.815249  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:54.818280  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:54.818662  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:54.818694  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:54.818925  459147 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 19:32:54.823292  459147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:32:54.837168  459147 kubeadm.go:883] updating cluster {Name:no-preload-713715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:32:54.837345  459147 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:32:54.837394  459147 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:32:54.875819  459147 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 19:32:54.875859  459147 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:32:54.875946  459147 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:54.875964  459147 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:54.875987  459147 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:54.876016  459147 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:54.876030  459147 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 19:32:54.875991  459147 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:54.875971  459147 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:54.875949  459147 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:54.878011  459147 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:54.878029  459147 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:54.878033  459147 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:54.878047  459147 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:54.878078  459147 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:54.878020  459147 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:54.878020  459147 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:54.878021  459147 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 19:32:55.044905  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.065945  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.077752  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.100576  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 19:32:55.105038  459147 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 19:32:55.105122  459147 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.105181  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.109323  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.138522  459147 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 19:32:55.138582  459147 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.138652  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.166056  459147 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 19:32:55.166116  459147 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.166172  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.225986  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.255114  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.291108  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.291133  459147 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 19:32:55.291179  459147 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.291225  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.291238  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.291283  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.291287  459147 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 19:32:55.291355  459147 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.291382  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.317030  459147 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 19:32:55.317075  459147 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.317122  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.372223  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.372291  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.372329  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.378465  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.378498  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 19:32:55.378504  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 19:32:55.378584  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.378593  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:55.378589  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:32:55.443789  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 19:32:55.443799  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 19:32:55.443851  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.443902  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.443914  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:32:55.451377  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 19:32:55.451452  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 19:32:55.451487  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 19:32:55.451496  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:32:55.451535  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 19:32:55.451540  459147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:32:55.452022  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 19:32:55.848543  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:56.622250  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.622728  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.622756  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:56.622674  460568 retry.go:31] will retry after 591.25664ms: waiting for machine to come up
	I0717 19:32:57.215318  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:57.215728  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:57.215760  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:57.215674  460568 retry.go:31] will retry after 1.178875952s: waiting for machine to come up
	I0717 19:32:58.396341  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:58.396810  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:58.396840  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:58.396757  460568 retry.go:31] will retry after 1.444090511s: waiting for machine to come up
	I0717 19:32:59.842294  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:59.842722  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:59.842750  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:59.842683  460568 retry.go:31] will retry after 1.660894501s: waiting for machine to come up
	I0717 19:32:57.819031  459147 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.367504857s)
	I0717 19:32:57.819080  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 19:32:57.819112  459147 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.367550192s)
	I0717 19:32:57.819123  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 19:32:57.819196  459147 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.970607417s)
	I0717 19:32:57.819211  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.375270996s)
	I0717 19:32:57.819232  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 19:32:57.819254  459147 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 19:32:57.819260  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:57.819291  459147 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:57.819322  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:57.819335  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:57.823619  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:59.879412  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.060056699s)
	I0717 19:32:59.879448  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 19:32:59.879475  459147 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.055825616s)
	I0717 19:32:59.879539  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 19:32:59.879480  459147 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:32:59.879645  459147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:32:59.879762  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:33:01.862179  459147 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.982496804s)
	I0717 19:33:01.862232  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 19:33:01.862284  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.982489567s)
	I0717 19:33:01.862311  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 19:33:01.862352  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:33:01.862439  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:33:01.505553  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:01.505921  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:01.505949  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:01.505876  460568 retry.go:31] will retry after 1.937668711s: waiting for machine to come up
	I0717 19:33:03.445356  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:03.445903  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:03.445949  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:03.445839  460568 retry.go:31] will retry after 2.088910223s: waiting for machine to come up
	I0717 19:33:05.537212  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:05.537609  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:05.537640  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:05.537527  460568 retry.go:31] will retry after 2.960616491s: waiting for machine to come up
	I0717 19:33:03.827643  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.965173972s)
	I0717 19:33:03.827677  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 19:33:03.827712  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:33:03.827769  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:33:05.287464  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.459663322s)
	I0717 19:33:05.287509  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 19:33:05.287543  459147 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:33:05.287638  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:33:08.500028  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:08.500625  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:08.500667  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:08.500568  460568 retry.go:31] will retry after 3.494426589s: waiting for machine to come up
	I0717 19:33:08.560006  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.272339244s)
	I0717 19:33:08.560060  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 19:33:08.560099  459147 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:33:08.560169  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:33:09.202632  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 19:33:09.202684  459147 cache_images.go:123] Successfully loaded all cached images
	I0717 19:33:09.202692  459147 cache_images.go:92] duration metric: took 14.326812062s to LoadCachedImages
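
The LoadCachedImages phase above repeats one pattern per image: stat the cached tarball under /var/lib/minikube/images, skip the copy when it already exists on the node, drop any stale tag with crictl, and stream the tarball into CRI-O's image store with podman load. The Go sketch below reproduces that sequence in miniature; the helper name loadCachedImage, the use of "podman image exists", and the simplified error handling are assumptions made for illustration, not minikube's actual implementation.

// Illustrative sketch (not minikube source): load a cached image tarball into
// the CRI-O store the way the log above does, assuming podman and crictl are
// on PATH and the tarball has already been copied to the node.
package main

import (
	"fmt"
	"os/exec"
)

func loadCachedImage(tarball, imageRef string) error {
	// Skip the load if the runtime already has the image.
	if exec.Command("sudo", "podman", "image", "exists", imageRef).Run() == nil {
		return nil
	}
	// Remove a stale tag so the freshly loaded image replaces it.
	_ = exec.Command("sudo", "crictl", "rmi", imageRef).Run()
	// Stream the cached tarball into the container runtime's store.
	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	// Paths and tags taken from the log above.
	err := loadCachedImage("/var/lib/minikube/images/etcd_3.5.14-0", "registry.k8s.io/etcd:3.5.14-0")
	if err != nil {
		fmt.Println(err)
	}
}
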
	I0717 19:33:09.202709  459147 kubeadm.go:934] updating node { 192.168.61.66 8443 v1.31.0-beta.0 crio true true} ...
	I0717 19:33:09.202917  459147 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-713715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:09.203024  459147 ssh_runner.go:195] Run: crio config
	I0717 19:33:09.250281  459147 cni.go:84] Creating CNI manager for ""
	I0717 19:33:09.250307  459147 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:09.250319  459147 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:09.250348  459147 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.66 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-713715 NodeName:no-preload-713715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.66"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.66 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:09.250507  459147 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.66
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-713715"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.66
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.66"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:09.250572  459147 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 19:33:09.260855  459147 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:09.260926  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:09.270148  459147 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0717 19:33:09.287113  459147 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 19:33:09.303147  459147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0717 19:33:09.319718  459147 ssh_runner.go:195] Run: grep 192.168.61.66	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:09.323343  459147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.66	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:09.335051  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:09.458012  459147 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:09.476517  459147 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715 for IP: 192.168.61.66
	I0717 19:33:09.476548  459147 certs.go:194] generating shared ca certs ...
	I0717 19:33:09.476581  459147 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:09.476822  459147 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:09.476888  459147 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:09.476901  459147 certs.go:256] generating profile certs ...
	I0717 19:33:09.477093  459147 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/client.key
	I0717 19:33:09.477157  459147 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.key.833d71c5
	I0717 19:33:09.477198  459147 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.key
	I0717 19:33:09.477346  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:09.477380  459147 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:09.477390  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:09.477415  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:09.477436  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:09.477460  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:09.477496  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:09.478210  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:09.523245  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:09.556326  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:09.592018  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:09.631190  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 19:33:09.663671  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:33:09.691062  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:09.715211  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:09.740818  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:09.766086  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:09.791739  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:09.817034  459147 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:09.835074  459147 ssh_runner.go:195] Run: openssl version
	I0717 19:33:09.841297  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:09.853525  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.857984  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.858052  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.864308  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:09.875577  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:09.886977  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.891840  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.891894  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.898044  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:09.910756  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:09.922945  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.927708  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.927771  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.933774  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:09.945891  459147 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:09.950743  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:09.956992  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:09.963228  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:09.969576  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:09.975912  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:09.982164  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
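
Each "openssl x509 -checkend 86400" run above asks whether a certificate will still be valid 24 hours from now, which decides whether the existing certs can be reused when the control plane is restarted. A minimal Go equivalent using only the standard library is sketched below; the function name expiresWithin and the hard-coded path are illustrative assumptions, not minikube's code.

// Illustrative sketch: report whether a PEM-encoded certificate expires within
// the given window, mirroring "openssl x509 -noout -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if the certificate's NotAfter falls inside the window.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}
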
	I0717 19:33:09.988308  459147 kubeadm.go:392] StartCluster: {Name:no-preload-713715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:09.988412  459147 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:09.988473  459147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:10.038048  459147 cri.go:89] found id: ""
	I0717 19:33:10.038123  459147 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:10.050153  459147 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:10.050179  459147 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:10.050244  459147 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:10.061413  459147 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:10.062384  459147 kubeconfig.go:125] found "no-preload-713715" server: "https://192.168.61.66:8443"
	I0717 19:33:10.064510  459147 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:10.075459  459147 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.66
	I0717 19:33:10.075494  459147 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:10.075507  459147 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:10.075551  459147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:10.115024  459147 cri.go:89] found id: ""
	I0717 19:33:10.115093  459147 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:10.135459  459147 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:10.147000  459147 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:10.147027  459147 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:10.147074  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:10.158197  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:10.158267  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:10.168726  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:10.178115  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:10.178169  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:10.187888  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:10.197501  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:10.197564  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:10.208958  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:10.219818  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:10.219889  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:10.230847  459147 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:10.242115  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:10.352629  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.306147  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.508125  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.570418  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.632907  459147 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:11.633012  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:12.133086  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:13.378581  459741 start.go:364] duration metric: took 4m1.766913597s to acquireMachinesLock for "old-k8s-version-998147"
	I0717 19:33:13.378661  459741 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:33:13.378670  459741 fix.go:54] fixHost starting: 
	I0717 19:33:13.379301  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:13.379346  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:13.399824  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0717 19:33:13.400269  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:13.400788  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:33:13.400811  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:13.401179  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:13.401339  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:13.401493  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetState
	I0717 19:33:13.403027  459741 fix.go:112] recreateIfNeeded on old-k8s-version-998147: state=Stopped err=<nil>
	I0717 19:33:13.403059  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	W0717 19:33:13.403205  459741 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:33:13.405244  459741 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-998147" ...
	I0717 19:33:11.996171  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.996646  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has current primary IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.996667  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Found IP for machine: 192.168.50.238
	I0717 19:33:11.996682  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Reserving static IP address...
	I0717 19:33:11.997157  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-378944", mac: "52:54:00:45:42:f3", ip: "192.168.50.238"} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:11.997197  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | skip adding static IP to network mk-default-k8s-diff-port-378944 - found existing host DHCP lease matching {name: "default-k8s-diff-port-378944", mac: "52:54:00:45:42:f3", ip: "192.168.50.238"}
	I0717 19:33:11.997213  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Reserved static IP address: 192.168.50.238
	I0717 19:33:11.997228  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for SSH to be available...
	I0717 19:33:11.997244  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Getting to WaitForSSH function...
	I0717 19:33:11.999193  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.999538  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:11.999564  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.999654  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Using SSH client type: external
	I0717 19:33:11.999689  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa (-rw-------)
	I0717 19:33:11.999718  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:11.999733  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | About to run SSH command:
	I0717 19:33:11.999751  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | exit 0
	I0717 19:33:12.124608  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | SSH cmd err, output: <nil>: 
	I0717 19:33:12.125041  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetConfigRaw
	I0717 19:33:12.125695  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:12.128263  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.128651  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.128683  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.128911  459447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/config.json ...
	I0717 19:33:12.129169  459447 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:12.129202  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:12.129412  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.131942  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.132259  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.132286  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.132464  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.132666  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.132847  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.133004  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.133213  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.133470  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.133484  459447 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:12.250371  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:12.250406  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.250672  459447 buildroot.go:166] provisioning hostname "default-k8s-diff-port-378944"
	I0717 19:33:12.250700  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.250891  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.253509  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.253895  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.253929  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.254116  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.254301  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.254467  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.254659  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.254809  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.255033  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.255048  459447 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-378944 && echo "default-k8s-diff-port-378944" | sudo tee /etc/hostname
	I0717 19:33:12.386839  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-378944
	
	I0717 19:33:12.386875  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.390265  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.390716  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.390758  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.390942  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.391165  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.391397  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.391593  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.391800  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.392028  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.392055  459447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-378944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-378944/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-378944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:12.510012  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:33:12.510080  459447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:12.510118  459447 buildroot.go:174] setting up certificates
	I0717 19:33:12.510139  459447 provision.go:84] configureAuth start
	I0717 19:33:12.510154  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.510469  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:12.513360  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.513713  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.513756  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.513840  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.516188  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.516606  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.516643  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.516778  459447 provision.go:143] copyHostCerts
	I0717 19:33:12.516867  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:12.516887  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:12.516946  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:12.517049  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:12.517060  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:12.517081  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:12.517133  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:12.517140  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:12.517157  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:12.517251  459447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-378944 san=[127.0.0.1 192.168.50.238 default-k8s-diff-port-378944 localhost minikube]
	I0717 19:33:12.664603  459447 provision.go:177] copyRemoteCerts
	I0717 19:33:12.664664  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:12.664692  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.667683  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.668071  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.668152  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.668276  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.668477  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.668665  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.668825  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:12.759500  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 19:33:12.789011  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:12.817876  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:12.847651  459447 provision.go:87] duration metric: took 337.491277ms to configureAuth
	I0717 19:33:12.847684  459447 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:12.847927  459447 config.go:182] Loaded profile config "default-k8s-diff-port-378944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:33:12.848029  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.851001  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.851460  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.851492  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.851670  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.851860  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.852050  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.852269  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.852466  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.852711  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.852736  459447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:13.135242  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:13.135272  459447 machine.go:97] duration metric: took 1.006081548s to provisionDockerMachine
	I0717 19:33:13.135286  459447 start.go:293] postStartSetup for "default-k8s-diff-port-378944" (driver="kvm2")
	I0717 19:33:13.135300  459447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:13.135331  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.135696  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:13.135731  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.138908  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.139252  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.139296  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.139577  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.139797  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.139996  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.140122  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.223998  459447 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:13.228297  459447 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:13.228327  459447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:13.228402  459447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:13.228508  459447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:13.228631  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:13.237923  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:13.262958  459447 start.go:296] duration metric: took 127.634911ms for postStartSetup
	I0717 19:33:13.263013  459447 fix.go:56] duration metric: took 19.949222697s for fixHost
	I0717 19:33:13.263040  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.265687  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.266102  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.266147  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.266274  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.266448  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.266658  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.266803  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.266974  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:13.267143  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:13.267154  459447 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:33:13.378375  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244793.352700977
	
	I0717 19:33:13.378410  459447 fix.go:216] guest clock: 1721244793.352700977
	I0717 19:33:13.378423  459447 fix.go:229] Guest: 2024-07-17 19:33:13.352700977 +0000 UTC Remote: 2024-07-17 19:33:13.263019102 +0000 UTC m=+276.814321502 (delta=89.681875ms)
	I0717 19:33:13.378449  459447 fix.go:200] guest clock delta is within tolerance: 89.681875ms
	I0717 19:33:13.378455  459447 start.go:83] releasing machines lock for "default-k8s-diff-port-378944", held for 20.064692595s
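The fix.go lines above compute the guest/host clock delta and accept it when it falls inside a tolerance. As a rough illustration only (the 2-second tolerance and the local time.Now() reference are assumptions for the sketch, not values taken from minikube), a Go sketch of that comparison:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock and the reference clock
// differ by no more than the given tolerance, mirroring the delta check
// logged by fix.go above.
func withinTolerance(guest, ref time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(ref)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	// Guest time parsed from `date +%s.%N` output, as in the log.
	guest := time.Unix(1721244793, 352700977)
	fmt.Println("within tolerance:", withinTolerance(guest, time.Now(), 2*time.Second))
}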
	I0717 19:33:13.378490  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.378818  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:13.382250  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.382663  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.382697  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.382819  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383336  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383515  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383640  459447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:13.383699  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.383782  459447 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:13.383808  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.386565  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.386802  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.386971  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.387022  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.387206  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.387255  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.387280  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.387377  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.387517  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.387595  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.387695  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.387769  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.387822  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.387963  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.491993  459447 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:13.498224  459447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:13.651601  459447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:13.659061  459447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:13.659131  459447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:13.679137  459447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:13.679172  459447 start.go:495] detecting cgroup driver to use...
	I0717 19:33:13.679244  459447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:13.700173  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:13.713284  459447 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:13.713345  459447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:13.727665  459447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:13.741270  459447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:13.850771  459447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:14.014484  459447 docker.go:233] disabling docker service ...
	I0717 19:33:14.014573  459447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:14.034049  459447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:14.051903  459447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:14.176188  459447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:14.339288  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:14.354934  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:14.376713  459447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:33:14.376781  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.387318  459447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:14.387395  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.401869  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.414206  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.426803  459447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:14.437992  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.448554  459447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.467390  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
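The sed invocations above rewrite individual keys in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). A minimal Go sketch of the same key replacement done in memory; setCrioOption and the sample config content are illustrative assumptions, not minikube code:

package main

import (
	"fmt"
	"regexp"
)

// setCrioOption mimics the sed edits above: replace an existing
// `key = ...` line in the config content with the desired value.
// Working on an in-memory string is an illustrative simplification.
func setCrioOption(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, key+` = "`+value+`"`)
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"
	conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}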
	I0717 19:33:14.478878  459447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:14.488552  459447 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:14.488623  459447 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:14.501075  459447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
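When the bridge-netfilter sysctl is missing, the log above shows the failure being tolerated, br_netfilter being loaded, and IPv4 forwarding enabled. A small Go sketch of that check-then-load sequence; running the commands locally via os/exec is an assumption here (the real flow runs them on the guest over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command locally; in the log the equivalent
// commands are executed on the guest over SSH (this local form is an
// illustrative assumption).
func run(cmd string) error {
	return exec.Command("sh", "-c", cmd).Run()
}

// ensureBridgeNetfilter mirrors the pattern above: if the bridge sysctl
// key is missing, try loading br_netfilter, then enable IPv4 forwarding
// either way.
func ensureBridgeNetfilter() error {
	if err := run("sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		if err := run("modprobe br_netfilter"); err != nil {
			return err
		}
	}
	return run("echo 1 > /proc/sys/net/ipv4/ip_forward")
}

func main() {
	fmt.Println(ensureBridgeNetfilter())
}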
	I0717 19:33:14.511085  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:14.673591  459447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:33:14.812878  459447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:14.812974  459447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:14.818074  459447 start.go:563] Will wait 60s for crictl version
	I0717 19:33:14.818143  459447 ssh_runner.go:195] Run: which crictl
	I0717 19:33:14.822116  459447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:14.861763  459447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:14.861843  459447 ssh_runner.go:195] Run: crio --version
	I0717 19:33:14.891729  459447 ssh_runner.go:195] Run: crio --version
	I0717 19:33:14.925638  459447 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 19:33:14.927088  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:14.930542  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:14.931022  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:14.931068  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:14.931326  459447 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:14.936085  459447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
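The /etc/hosts step above strips any existing host.minikube.internal line and appends a fresh one so the entry stays unique. A Go sketch of that upsert pattern, operating on an in-memory string purely for illustration (the real command rewrites /etc/hosts on the guest via the shell one-liner shown in the log):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any existing line for the hostname, then appends
// a fresh "IP<TAB>hostname" entry, matching the grep -v / echo pattern above.
func upsertHostsEntry(hosts, ip, hostname string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+hostname) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	fmt.Print(upsertHostsEntry("127.0.0.1\tlocalhost\n", "192.168.50.1", "host.minikube.internal"))
}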
	I0717 19:33:14.949590  459447 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-378944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:14.949747  459447 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:33:14.949875  459447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:14.991945  459447 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 19:33:14.992031  459447 ssh_runner.go:195] Run: which lz4
	I0717 19:33:14.996373  459447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:33:15.000840  459447 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:15.000875  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
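Above, the preload tarball is copied only after a stat on /preloaded.tar.lz4 fails. The same idea reduced to a local-filesystem Go sketch (the scp transfer and the exact paths are stand-ins for illustration, not the actual transfer code):

package main

import (
	"fmt"
	"os"
)

// ensurePreload copies the preload tarball only when the target path is
// missing, mirroring the stat-then-scp sequence in the log.
func ensurePreload(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to copy
	}
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	return os.WriteFile(dst, data, 0o644)
}

func main() {
	fmt.Println(ensurePreload("preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4", "/preloaded.tar.lz4"))
}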
	I0717 19:33:13.406372  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .Start
	I0717 19:33:13.406519  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring networks are active...
	I0717 19:33:13.407255  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring network default is active
	I0717 19:33:13.407627  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring network mk-old-k8s-version-998147 is active
	I0717 19:33:13.408062  459741 main.go:141] libmachine: (old-k8s-version-998147) Getting domain xml...
	I0717 19:33:13.408909  459741 main.go:141] libmachine: (old-k8s-version-998147) Creating domain...
	I0717 19:33:14.690306  459741 main.go:141] libmachine: (old-k8s-version-998147) Waiting to get IP...
	I0717 19:33:14.691339  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:14.691802  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:14.691860  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:14.691788  460739 retry.go:31] will retry after 292.702678ms: waiting for machine to come up
	I0717 19:33:14.986450  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:14.986962  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:14.986987  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:14.986940  460739 retry.go:31] will retry after 251.722663ms: waiting for machine to come up
	I0717 19:33:15.240732  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:15.241343  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:15.241374  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:15.241290  460739 retry.go:31] will retry after 352.774498ms: waiting for machine to come up
	I0717 19:33:15.596176  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:15.596833  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:15.596859  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:15.596740  460739 retry.go:31] will retry after 570.542375ms: waiting for machine to come up
	I0717 19:33:16.168613  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:16.169103  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:16.169125  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:16.169061  460739 retry.go:31] will retry after 505.770507ms: waiting for machine to come up
	I0717 19:33:12.633596  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:12.674417  459147 api_server.go:72] duration metric: took 1.041511526s to wait for apiserver process to appear ...
	I0717 19:33:12.674443  459147 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:33:12.674473  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:12.674950  459147 api_server.go:269] stopped: https://192.168.61.66:8443/healthz: Get "https://192.168.61.66:8443/healthz": dial tcp 192.168.61.66:8443: connect: connection refused
	I0717 19:33:13.174575  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.167465  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.167503  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.167518  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.195663  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.195695  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.195712  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.203849  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.203880  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.674535  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.681650  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:16.681679  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:17.174938  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:17.186827  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:17.186890  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:17.674682  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:17.680814  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:17.680865  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:18.175463  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:18.181547  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:18.181576  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:18.675166  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:18.681507  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:18.681552  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:19.174630  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:19.183370  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:19.183416  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:19.674583  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:19.682432  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 200:
	ok
	I0717 19:33:19.691489  459147 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:33:19.691518  459147 api_server.go:131] duration metric: took 7.017066476s to wait for apiserver health ...
	I0717 19:33:19.691534  459147 cni.go:84] Creating CNI manager for ""
	I0717 19:33:19.691542  459147 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:19.693575  459147 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
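Process 459147 above polls https://192.168.61.66:8443/healthz, logging each 403/500 body, until the endpoint finally returns 200 and the control-plane version can be read. A minimal Go sketch of such a polling loop; the 500ms retry interval and the InsecureSkipVerify transport are assumptions for the sketch, not minikube's actual client settings:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 or the deadline passes, matching the retry pattern in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.66:8443/healthz", 4*time.Minute))
}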
	I0717 19:33:16.494615  459447 crio.go:462] duration metric: took 1.498275118s to copy over tarball
	I0717 19:33:16.494697  459447 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:18.869018  459447 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.37428331s)
	I0717 19:33:18.869052  459447 crio.go:469] duration metric: took 2.374406548s to extract the tarball
	I0717 19:33:18.869063  459447 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:18.911073  459447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:18.952704  459447 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:33:18.952731  459447 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:33:18.952740  459447 kubeadm.go:934] updating node { 192.168.50.238 8444 v1.30.2 crio true true} ...
	I0717 19:33:18.952871  459447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-378944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:18.952961  459447 ssh_runner.go:195] Run: crio config
	I0717 19:33:19.004936  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:33:19.004962  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:19.004976  459447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:19.004997  459447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.238 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-378944 NodeName:default-k8s-diff-port-378944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:19.005127  459447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.238
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-378944"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:19.005190  459447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 19:33:19.018466  459447 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:19.018532  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:19.030706  459447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 19:33:19.050125  459447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:19.066411  459447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0717 19:33:19.083019  459447 ssh_runner.go:195] Run: grep 192.168.50.238	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:19.086956  459447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:19.098483  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:19.219538  459447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:19.240712  459447 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944 for IP: 192.168.50.238
	I0717 19:33:19.240760  459447 certs.go:194] generating shared ca certs ...
	I0717 19:33:19.240784  459447 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:19.240971  459447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:19.241029  459447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:19.241046  459447 certs.go:256] generating profile certs ...
	I0717 19:33:19.241147  459447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/client.key
	I0717 19:33:19.241232  459447 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.key.e4ed83d1
	I0717 19:33:19.241292  459447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.key
	I0717 19:33:19.241430  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:19.241472  459447 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:19.241488  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:19.241527  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:19.241563  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:19.241599  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:19.241670  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:19.242447  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:19.274950  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:19.305226  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:19.348027  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:19.384636  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 19:33:19.415615  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:33:19.443553  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:19.477731  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:19.509828  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:19.536409  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:19.562482  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:19.586980  459447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:19.603021  459447 ssh_runner.go:195] Run: openssl version
	I0717 19:33:19.608707  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:19.619272  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.624082  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.624144  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.630085  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:19.640930  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:19.651717  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.656207  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.656265  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.662211  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:19.672893  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:19.686880  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.691831  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.691883  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.699526  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:19.712458  459447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:19.717815  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:19.726172  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:19.732924  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:19.739322  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:19.749452  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:19.756136  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
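(Editor's note: the repeated `openssl x509 -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster restart is attempted. A minimal Go sketch of the same check follows; the file path and function names are illustrative, not minikube's actual api implementation.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkend reports whether the certificate at path is still valid for at
	// least the given duration, mirroring `openssl x509 -checkend <seconds>`.
	func checkend(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		// One of the paths checked in the log above.
		ok, err := checkend("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("valid for another 24h:", ok)
	}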
	I0717 19:33:19.763812  459447 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-378944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:19.763936  459447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:19.763998  459447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:19.807197  459447 cri.go:89] found id: ""
	I0717 19:33:19.807303  459447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:19.819547  459447 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:19.819577  459447 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:19.819652  459447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:19.832162  459447 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:19.833260  459447 kubeconfig.go:125] found "default-k8s-diff-port-378944" server: "https://192.168.50.238:8444"
	I0717 19:33:19.835685  459447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:19.849027  459447 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.238
	I0717 19:33:19.849077  459447 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:19.849094  459447 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:19.849182  459447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:19.893260  459447 cri.go:89] found id: ""
	I0717 19:33:19.893337  459447 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:19.910254  459447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:19.920017  459447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:19.920039  459447 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:19.920093  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 19:33:19.929144  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:19.929212  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:19.938461  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 19:33:19.947172  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:19.947242  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:19.956774  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 19:33:19.965778  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:19.965832  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:19.975529  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 19:33:19.984977  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:19.985037  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:19.994548  459447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:20.003758  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:20.326183  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.077120  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.274281  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.372150  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.472510  459447 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:21.472619  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:16.676221  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:16.676783  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:16.676810  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:16.676699  460739 retry.go:31] will retry after 789.027841ms: waiting for machine to come up
	I0717 19:33:17.467899  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:17.468360  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:17.468388  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:17.468307  460739 retry.go:31] will retry after 851.039047ms: waiting for machine to come up
	I0717 19:33:18.321307  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:18.321848  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:18.321877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:18.321790  460739 retry.go:31] will retry after 1.177722997s: waiting for machine to come up
	I0717 19:33:19.501191  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:19.501846  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:19.501877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:19.501754  460739 retry.go:31] will retry after 1.20353732s: waiting for machine to come up
	I0717 19:33:20.707223  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:20.707681  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:20.707715  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:20.707620  460739 retry.go:31] will retry after 2.05955161s: waiting for machine to come up
	I0717 19:33:19.694884  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:33:19.710519  459147 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:33:19.732437  459147 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:33:19.743619  459147 system_pods.go:59] 8 kube-system pods found
	I0717 19:33:19.743647  459147 system_pods.go:61] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:33:19.743657  459147 system_pods.go:61] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:33:19.743667  459147 system_pods.go:61] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:33:19.743681  459147 system_pods.go:61] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:33:19.743688  459147 system_pods.go:61] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:33:19.743698  459147 system_pods.go:61] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:33:19.743711  459147 system_pods.go:61] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:33:19.743718  459147 system_pods.go:61] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:33:19.743725  459147 system_pods.go:74] duration metric: took 11.261865ms to wait for pod list to return data ...
	I0717 19:33:19.743742  459147 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:33:19.749108  459147 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:33:19.749135  459147 node_conditions.go:123] node cpu capacity is 2
	I0717 19:33:19.749163  459147 node_conditions.go:105] duration metric: took 5.414531ms to run NodePressure ...
	I0717 19:33:19.749183  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:22.151017  459147 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.401804862s)
	I0717 19:33:22.151065  459147 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:33:22.158240  459147 kubeadm.go:739] kubelet initialised
	I0717 19:33:22.158277  459147 kubeadm.go:740] duration metric: took 7.198956ms waiting for restarted kubelet to initialise ...
	I0717 19:33:22.158298  459147 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:22.164783  459147 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.174103  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.174465  459147 pod_ready.go:81] duration metric: took 9.568158ms for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.174513  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.174544  459147 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.184692  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "etcd-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.184804  459147 pod_ready.go:81] duration metric: took 10.23708ms for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.184862  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "etcd-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.184891  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.193029  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-apiserver-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.193143  459147 pod_ready.go:81] duration metric: took 8.227095ms for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.193175  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-apiserver-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.193234  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.200916  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.201017  459147 pod_ready.go:81] duration metric: took 7.740745ms for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.201047  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.201081  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.555554  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-proxy-x85f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.555590  459147 pod_ready.go:81] duration metric: took 354.475367ms for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.555603  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-proxy-x85f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.555612  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.977850  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-scheduler-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.977889  459147 pod_ready.go:81] duration metric: took 422.268041ms for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.977904  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-scheduler-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.977913  459147 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:23.355727  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:23.355765  459147 pod_ready.go:81] duration metric: took 377.839773ms for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:23.355778  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:23.355787  459147 pod_ready.go:38] duration metric: took 1.197476636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:23.355807  459147 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:33:23.369763  459147 ops.go:34] apiserver oom_adj: -16
	I0717 19:33:23.369789  459147 kubeadm.go:597] duration metric: took 13.319602224s to restartPrimaryControlPlane
	I0717 19:33:23.369801  459147 kubeadm.go:394] duration metric: took 13.381501456s to StartCluster
	I0717 19:33:23.369825  459147 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:23.369925  459147 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:33:23.371364  459147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:23.371643  459147 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:33:23.371763  459147 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:33:23.371851  459147 addons.go:69] Setting storage-provisioner=true in profile "no-preload-713715"
	I0717 19:33:23.371902  459147 addons.go:234] Setting addon storage-provisioner=true in "no-preload-713715"
	I0717 19:33:23.371905  459147 config.go:182] Loaded profile config "no-preload-713715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0717 19:33:23.371915  459147 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:33:23.371904  459147 addons.go:69] Setting default-storageclass=true in profile "no-preload-713715"
	I0717 19:33:23.371921  459147 addons.go:69] Setting metrics-server=true in profile "no-preload-713715"
	I0717 19:33:23.371949  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.371963  459147 addons.go:234] Setting addon metrics-server=true in "no-preload-713715"
	I0717 19:33:23.371962  459147 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-713715"
	W0717 19:33:23.371973  459147 addons.go:243] addon metrics-server should already be in state true
	I0717 19:33:23.372010  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.372248  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372283  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.372354  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372363  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372380  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.372466  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.373392  459147 out.go:177] * Verifying Kubernetes components...
	I0717 19:33:23.374639  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:23.391842  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45469
	I0717 19:33:23.391844  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I0717 19:33:23.392376  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.392449  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.392909  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.392934  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.393266  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.393283  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.393316  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.393673  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.394050  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.394066  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.394279  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.394317  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.413449  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0717 19:33:23.413977  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.414416  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.414429  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.414535  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35317
	I0717 19:33:23.414847  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.415050  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.415439  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33637
	I0717 19:33:23.415603  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.416098  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.416416  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.416442  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.416782  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.416860  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.417110  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.417129  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.417169  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.417631  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.417898  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.419162  459147 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:33:23.419540  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.420437  459147 addons.go:234] Setting addon default-storageclass=true in "no-preload-713715"
	W0717 19:33:23.420461  459147 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:33:23.420531  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.420670  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:33:23.420690  459147 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:33:23.420710  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.420935  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.420987  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.421482  459147 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:23.422876  459147 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:33:23.422895  459147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:33:23.422914  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.424665  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.425387  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.425596  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.425648  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.425860  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.426032  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.426224  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.426508  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.426884  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.426912  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.427019  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.427204  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.427375  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.427536  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.440935  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0717 19:33:23.441405  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.442015  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.442036  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.442449  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.443045  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.443086  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.462722  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0717 19:33:23.463099  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.463642  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.463666  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.464015  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.464302  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.465945  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.466153  459147 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:33:23.466168  459147 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:33:23.466187  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.469235  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.469665  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.469690  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.469961  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.470125  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.470263  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.470380  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.604321  459147 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:23.631723  459147 node_ready.go:35] waiting up to 6m0s for node "no-preload-713715" to be "Ready" ...
	I0717 19:33:23.691508  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:33:23.691839  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:33:23.870407  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:33:23.870440  459147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:33:23.962828  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:33:23.962862  459147 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:33:24.048413  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:33:24.048458  459147 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:33:24.180828  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:33:25.337869  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.645994421s)
	I0717 19:33:25.337928  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.337939  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.338245  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.338260  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.338267  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.338279  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.340140  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.340158  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.340163  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.341608  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.650024823s)
	I0717 19:33:25.341659  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.341673  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.341991  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.342008  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.342052  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.342072  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.342087  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.343152  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.343174  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.374730  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.374764  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.375093  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.375192  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.375214  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.648979  459147 node_ready.go:53] node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:25.756694  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.575723552s)
	I0717 19:33:25.756793  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.756809  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.757133  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.757197  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.757210  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.757222  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.757231  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.757463  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.757496  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.757508  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.757518  459147 addons.go:475] Verifying addon metrics-server=true in "no-preload-713715"
	I0717 19:33:25.760056  459147 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:33:21.973023  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:22.473773  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:22.494696  459447 api_server.go:72] duration metric: took 1.022184833s to wait for apiserver process to appear ...
	I0717 19:33:22.494730  459447 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:33:22.494756  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:22.495278  459447 api_server.go:269] stopped: https://192.168.50.238:8444/healthz: Get "https://192.168.50.238:8444/healthz": dial tcp 192.168.50.238:8444: connect: connection refused
	I0717 19:33:22.994814  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.523793  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:25.523836  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:25.523861  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.572664  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:25.572703  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:25.994910  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.999901  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:25.999941  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
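(Editor's note: the 403 and 500 responses above are expected while the restarted apiserver finishes its post-start hooks; the tool simply keeps polling /healthz until it returns 200. A rough Go sketch of such a polling loop follows; names, the skipped TLS verification, and the anonymous request are illustrative assumptions, not minikube's actual api_server.go code, which authenticates with client certificates.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the timeout elapses, logging any non-200 body it sees along the way.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.238:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}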
	I0717 19:33:22.769700  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:22.770437  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:22.770462  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:22.770379  460739 retry.go:31] will retry after 2.380645077s: waiting for machine to come up
	I0717 19:33:25.152531  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:25.153124  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:25.153154  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:25.152995  460739 retry.go:31] will retry after 2.594173577s: waiting for machine to come up
	I0717 19:33:25.761158  459147 addons.go:510] duration metric: took 2.389396179s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:33:26.636593  459147 node_ready.go:49] node "no-preload-713715" has status "Ready":"True"
	I0717 19:33:26.636631  459147 node_ready.go:38] duration metric: took 3.004871258s for node "no-preload-713715" to be "Ready" ...
	I0717 19:33:26.636647  459147 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:26.645025  459147 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.657588  459147 pod_ready.go:92] pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:26.657621  459147 pod_ready.go:81] duration metric: took 12.564266ms for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.657643  459147 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.495865  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:26.501901  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:26.501948  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:26.995379  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:27.007246  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:27.007293  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:27.495657  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:27.500340  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:27.500376  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:27.995477  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.001272  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:28.001311  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:28.495106  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.499745  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:28.499785  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:28.994956  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.999368  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 200:
	ok
	I0717 19:33:29.005912  459447 api_server.go:141] control plane version: v1.30.2
	I0717 19:33:29.005941  459447 api_server.go:131] duration metric: took 6.511204058s to wait for apiserver health ...
	I0717 19:33:29.005952  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:33:29.005958  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:29.007962  459447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:33:29.009467  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:33:29.020044  459447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
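The 496-byte conflist copied above is not reproduced in the log; a representative bridge CNI configuration of the kind minikube installs is sketched below (every field value here, including the subnet, is an assumption rather than the actual file contents):

    # Illustrative only: writes a minimal bridge CNI config similar to what the step above copies.
    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF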
	I0717 19:33:29.039591  459447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:33:29.049534  459447 system_pods.go:59] 8 kube-system pods found
	I0717 19:33:29.049575  459447 system_pods.go:61] "coredns-7db6d8ff4d-zrllj" [a343d67b-7bfe-4433-a6a0-dd129f622484] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:33:29.049585  459447 system_pods.go:61] "etcd-default-k8s-diff-port-378944" [8b73f940-3131-4c49-88a8-909e448a17fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:33:29.049592  459447 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-378944" [4368acf5-fcf0-4bb1-8518-dc883a3ad94a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:33:29.049600  459447 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-378944" [a9dce074-19b1-4375-bb51-2fa3a7e628a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:33:29.049605  459447 system_pods.go:61] "kube-proxy-qq6gq" [7cd51f2c-1d5d-4376-8685-a4912f158995] Running
	I0717 19:33:29.049609  459447 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-378944" [2889aa80-5d65-485f-b4ef-396e76a40a80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:33:29.049617  459447 system_pods.go:61] "metrics-server-569cc877fc-7rl9d" [217e917f-6179-4b21-baed-7293ef9f6fc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:33:29.049621  459447 system_pods.go:61] "storage-provisioner" [fc434634-e675-4df7-8df2-330e3f2cf36b] Running
	I0717 19:33:29.049628  459447 system_pods.go:74] duration metric: took 10.013687ms to wait for pod list to return data ...
	I0717 19:33:29.049640  459447 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:33:29.053279  459447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:33:29.053306  459447 node_conditions.go:123] node cpu capacity is 2
	I0717 19:33:29.053318  459447 node_conditions.go:105] duration metric: took 3.672966ms to run NodePressure ...
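The ephemeral-storage and CPU figures logged by the NodePressure check come from the node's capacity in its status; one way to view the same values, assuming the node and context names follow the profile name as in the log:

    # Show the node capacity section the NodePressure check reads (cpu, memory, ephemeral-storage).
    kubectl --context default-k8s-diff-port-378944 describe node default-k8s-diff-port-378944 \
      | grep -A6 'Capacity:'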
	I0717 19:33:29.053336  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:29.329460  459447 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:33:29.335545  459447 kubeadm.go:739] kubelet initialised
	I0717 19:33:29.335570  459447 kubeadm.go:740] duration metric: took 6.082515ms waiting for restarted kubelet to initialise ...
	I0717 19:33:29.335587  459447 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:29.343632  459447 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.348772  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.348798  459447 pod_ready.go:81] duration metric: took 5.144899ms for pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.348810  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.348820  459447 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.354355  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.354386  459447 pod_ready.go:81] duration metric: took 5.550767ms for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.354398  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.354410  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.359416  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.359433  459447 pod_ready.go:81] duration metric: took 5.007721ms for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.359442  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.359448  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:31.369477  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:27.748311  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:27.748683  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:27.748710  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:27.748647  460739 retry.go:31] will retry after 3.034683519s: waiting for machine to come up
	I0717 19:33:30.784524  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.784995  459741 main.go:141] libmachine: (old-k8s-version-998147) Found IP for machine: 192.168.72.208
	I0717 19:33:30.785018  459741 main.go:141] libmachine: (old-k8s-version-998147) Reserving static IP address...
	I0717 19:33:30.785042  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has current primary IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.785437  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "old-k8s-version-998147", mac: "52:54:00:e7:d4:91", ip: "192.168.72.208"} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.785462  459741 main.go:141] libmachine: (old-k8s-version-998147) Reserved static IP address: 192.168.72.208
	I0717 19:33:30.785478  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | skip adding static IP to network mk-old-k8s-version-998147 - found existing host DHCP lease matching {name: "old-k8s-version-998147", mac: "52:54:00:e7:d4:91", ip: "192.168.72.208"}
	I0717 19:33:30.785490  459741 main.go:141] libmachine: (old-k8s-version-998147) Waiting for SSH to be available...
	I0717 19:33:30.785502  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Getting to WaitForSSH function...
	I0717 19:33:30.787861  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.788286  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.788339  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.788506  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using SSH client type: external
	I0717 19:33:30.788535  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa (-rw-------)
	I0717 19:33:30.788575  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:30.788592  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | About to run SSH command:
	I0717 19:33:30.788605  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | exit 0
	I0717 19:33:30.916827  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | SSH cmd err, output: <nil>: 
	I0717 19:33:30.917232  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetConfigRaw
	I0717 19:33:30.917949  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:30.920672  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.921033  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.921069  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.921321  459741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json ...
	I0717 19:33:30.921518  459741 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:30.921538  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:30.921777  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:30.923995  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.924337  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.924364  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.924515  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:30.924708  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:30.924894  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:30.925021  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:30.925229  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:30.925417  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:30.925428  459741 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:31.037218  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:31.037249  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.037537  459741 buildroot.go:166] provisioning hostname "old-k8s-version-998147"
	I0717 19:33:31.037569  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.037782  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.040877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.041209  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.041252  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.041382  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.041577  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.041764  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.041940  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.042121  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.042313  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.042329  459741 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-998147 && echo "old-k8s-version-998147" | sudo tee /etc/hostname
	I0717 19:33:31.169368  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-998147
	
	I0717 19:33:31.169401  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.172170  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.172475  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.172520  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.172739  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.172950  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.173133  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.173321  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.173557  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.173809  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.173828  459741 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-998147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-998147/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-998147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:31.293920  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:33:31.293957  459741 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:31.293997  459741 buildroot.go:174] setting up certificates
	I0717 19:33:31.294010  459741 provision.go:84] configureAuth start
	I0717 19:33:31.294022  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.294383  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:31.297356  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.297766  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.297800  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.297961  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.300159  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.300454  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.300507  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.300638  459741 provision.go:143] copyHostCerts
	I0717 19:33:31.300707  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:31.300721  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:31.300787  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:31.300917  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:31.300929  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:31.300962  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:31.301038  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:31.301046  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:31.301066  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:31.301112  459741 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-998147 san=[127.0.0.1 192.168.72.208 localhost minikube old-k8s-version-998147]
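minikube generates that server certificate in-process; purely for reference, an equivalent openssl sketch using the organization and SANs from the log line above (file names, key size, and validity period are assumptions):

    # Sketch only: minikube does this programmatically, not via openssl.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.old-k8s-version-998147/CN=minikube" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.72.208,DNS:localhost,DNS:minikube,DNS:old-k8s-version-998147') \
      -out server.pem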
	I0717 19:33:32.217560  459061 start.go:364] duration metric: took 53.370503448s to acquireMachinesLock for "embed-certs-637675"
	I0717 19:33:32.217640  459061 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:33:32.217653  459061 fix.go:54] fixHost starting: 
	I0717 19:33:32.218221  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:32.218273  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:32.236152  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0717 19:33:32.236693  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:32.237234  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:33:32.237261  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:32.237630  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:32.237827  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:32.237981  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:33:32.239582  459061 fix.go:112] recreateIfNeeded on embed-certs-637675: state=Stopped err=<nil>
	I0717 19:33:32.239630  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	W0717 19:33:32.239777  459061 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:33:32.241662  459061 out.go:177] * Restarting existing kvm2 VM for "embed-certs-637675" ...
	I0717 19:33:28.164383  459147 pod_ready.go:92] pod "etcd-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.164416  459147 pod_ready.go:81] duration metric: took 1.506759615s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.164430  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.169329  459147 pod_ready.go:92] pod "kube-apiserver-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.169359  459147 pod_ready.go:81] duration metric: took 4.920897ms for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.169374  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.174231  459147 pod_ready.go:92] pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.174256  459147 pod_ready.go:81] duration metric: took 4.874197ms for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.174270  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:30.181752  459147 pod_ready.go:102] pod "kube-proxy-x85f5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:32.181095  459147 pod_ready.go:92] pod "kube-proxy-x85f5" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:32.181128  459147 pod_ready.go:81] duration metric: took 4.006849577s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.181146  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.186196  459147 pod_ready.go:92] pod "kube-scheduler-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:32.186226  459147 pod_ready.go:81] duration metric: took 5.071066ms for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.186240  459147 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:31.522479  459741 provision.go:177] copyRemoteCerts
	I0717 19:33:31.522546  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:31.522602  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.525768  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.526171  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.526203  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.526344  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.526551  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.526724  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.526904  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:31.612117  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 19:33:31.638832  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:31.664757  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:31.689941  459741 provision.go:87] duration metric: took 395.916596ms to configureAuth
	I0717 19:33:31.689975  459741 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:31.690190  459741 config.go:182] Loaded profile config "old-k8s-version-998147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:33:31.690265  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.692837  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.693207  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.693234  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.693449  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.693671  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.693826  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.694059  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.694245  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.694413  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.694429  459741 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:31.974825  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:31.974852  459741 machine.go:97] duration metric: took 1.053320969s to provisionDockerMachine
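The `%!s(MISSING)` in the crio configuration command above is a Go formatting artifact in the log (the printf verb's argument was not rendered when the command was logged). Judging by the output that followed, the command that actually ran is presumably equivalent to:

    # Hedged reconstruction of the logged command; the exact printf string is an assumption.
    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio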
	I0717 19:33:31.974865  459741 start.go:293] postStartSetup for "old-k8s-version-998147" (driver="kvm2")
	I0717 19:33:31.974875  459741 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:31.974896  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:31.975219  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:31.975248  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.978388  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.978767  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.978799  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.979026  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.979228  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.979423  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.979548  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.063516  459741 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:32.067826  459741 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:32.067854  459741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:32.067935  459741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:32.068032  459741 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:32.068178  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:32.077672  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:32.102750  459741 start.go:296] duration metric: took 127.86801ms for postStartSetup
	I0717 19:33:32.102793  459741 fix.go:56] duration metric: took 18.724124854s for fixHost
	I0717 19:33:32.102816  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.105928  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.106324  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.106349  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.106498  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.106750  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.106912  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.107091  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.107267  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:32.107435  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:32.107447  459741 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:33:32.217378  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244812.173823160
	
	I0717 19:33:32.217412  459741 fix.go:216] guest clock: 1721244812.173823160
	I0717 19:33:32.217424  459741 fix.go:229] Guest: 2024-07-17 19:33:32.17382316 +0000 UTC Remote: 2024-07-17 19:33:32.102798084 +0000 UTC m=+260.639424711 (delta=71.025076ms)
	I0717 19:33:32.217462  459741 fix.go:200] guest clock delta is within tolerance: 71.025076ms
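The `date +%!s(MISSING).%!N(MISSING)` command above is the same logging artifact; the guest-clock probe is presumably just:

    # Presumed form of the obscured command; its output matches the guest clock value in the log.
    date +%s.%N    # e.g. 1721244812.173823160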
	I0717 19:33:32.217476  459741 start.go:83] releasing machines lock for "old-k8s-version-998147", held for 18.838841423s
	I0717 19:33:32.217515  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.217908  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:32.221349  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.221669  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.221701  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.221823  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222444  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222647  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222744  459741 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:32.222799  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.222935  459741 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:32.222963  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.225811  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.225842  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226180  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.226207  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226235  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.226252  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226347  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.226651  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.226654  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.226818  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.226911  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.226963  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.227238  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.227243  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.331645  459741 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:32.338968  459741 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:32.491164  459741 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:32.498407  459741 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:32.498472  459741 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:32.515829  459741 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:32.515858  459741 start.go:495] detecting cgroup driver to use...
	I0717 19:33:32.515926  459741 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:32.534094  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:32.549874  459741 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:32.549938  459741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:32.565389  459741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:32.580187  459741 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:32.709855  459741 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:32.889734  459741 docker.go:233] disabling docker service ...
	I0717 19:33:32.889804  459741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:32.909179  459741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:32.923944  459741 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:33.043740  459741 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:33.174272  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:33.189545  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:33.210166  459741 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 19:33:33.210238  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.222478  459741 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:33.222547  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.234479  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.247161  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.258702  459741 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:33.271516  459741 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:33.282032  459741 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:33.282087  459741 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:33.296554  459741 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:33:33.307378  459741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:33.447447  459741 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:33:33.606295  459741 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:33.606388  459741 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:33.611193  459741 start.go:563] Will wait 60s for crictl version
	I0717 19:33:33.611252  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:33.615370  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:33.660721  459741 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:33.660803  459741 ssh_runner.go:195] Run: crio --version
	I0717 19:33:33.695406  459741 ssh_runner.go:195] Run: crio --version
	I0717 19:33:33.727703  459741 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
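As a quick reference, the CRI-O reconfiguration logged above amounts to roughly the following shell steps. This is only an illustrative condensation of the Run: lines shown in this log (same config file, socket path, and pause image tag as logged), not an exact replay of the minikube driver:

    # point crictl at the CRI-O socket
    sudo mkdir -p /etc && printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # use the pause image this Kubernetes version expects
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    # switch CRI-O (and conmon) to the cgroupfs cgroup driver
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # enable bridge netfilter and IP forwarding, then restart the runtime
    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload
    sudo systemctl restart crio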
	I0717 19:33:32.243015  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Start
	I0717 19:33:32.243191  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring networks are active...
	I0717 19:33:32.244008  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring network default is active
	I0717 19:33:32.244302  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring network mk-embed-certs-637675 is active
	I0717 19:33:32.244826  459061 main.go:141] libmachine: (embed-certs-637675) Getting domain xml...
	I0717 19:33:32.245560  459061 main.go:141] libmachine: (embed-certs-637675) Creating domain...
	I0717 19:33:33.537081  459061 main.go:141] libmachine: (embed-certs-637675) Waiting to get IP...
	I0717 19:33:33.538117  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:33.538562  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:33.538630  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:33.538531  460929 retry.go:31] will retry after 245.180235ms: waiting for machine to come up
	I0717 19:33:33.784957  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:33.785535  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:33.785567  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:33.785490  460929 retry.go:31] will retry after 353.289988ms: waiting for machine to come up
	I0717 19:33:34.141088  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.141697  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.141721  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.141637  460929 retry.go:31] will retry after 404.344963ms: waiting for machine to come up
	I0717 19:33:34.547331  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.547928  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.547956  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.547822  460929 retry.go:31] will retry after 382.194721ms: waiting for machine to come up
	I0717 19:33:34.931269  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.931746  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.931776  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.931653  460929 retry.go:31] will retry after 485.884671ms: waiting for machine to come up
	I0717 19:33:35.419418  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:35.419957  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:35.419991  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:35.419896  460929 retry.go:31] will retry after 598.409396ms: waiting for machine to come up
	I0717 19:33:36.019507  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:36.020091  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:36.020118  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:36.020041  460929 retry.go:31] will retry after 815.010839ms: waiting for machine to come up
	I0717 19:33:33.866250  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:35.869264  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:33.729003  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:33.732254  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:33.732730  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:33.732761  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:33.732992  459741 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:33.737578  459741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:33.751952  459741 kubeadm.go:883] updating cluster {Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:33.752069  459741 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:33:33.752141  459741 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:33.799085  459741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:33:33.799167  459741 ssh_runner.go:195] Run: which lz4
	I0717 19:33:33.803899  459741 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:33:33.808398  459741 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:33.808431  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 19:33:35.539736  459741 crio.go:462] duration metric: took 1.735871318s to copy over tarball
	I0717 19:33:35.539833  459741 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:34.210207  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:36.693543  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:36.837115  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:36.837531  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:36.837560  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:36.837482  460929 retry.go:31] will retry after 1.072167201s: waiting for machine to come up
	I0717 19:33:37.911591  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:37.912149  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:37.912173  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:37.912104  460929 retry.go:31] will retry after 1.782290473s: waiting for machine to come up
	I0717 19:33:39.696512  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:39.696980  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:39.697015  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:39.696923  460929 retry.go:31] will retry after 1.896567581s: waiting for machine to come up
	I0717 19:33:36.872836  459447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.872865  459447 pod_ready.go:81] duration metric: took 7.513409896s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.872876  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qq6gq" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.878642  459447 pod_ready.go:92] pod "kube-proxy-qq6gq" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.878665  459447 pod_ready.go:81] duration metric: took 5.782297ms for pod "kube-proxy-qq6gq" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.878673  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.887916  459447 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.887943  459447 pod_ready.go:81] duration metric: took 9.259629ms for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.887957  459447 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:39.411899  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:38.677338  459741 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.137463162s)
	I0717 19:33:38.677381  459741 crio.go:469] duration metric: took 3.137607875s to extract the tarball
	I0717 19:33:38.677396  459741 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:38.721981  459741 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:38.756640  459741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:33:38.756670  459741 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:33:38.756755  459741 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:38.756840  459741 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:38.756885  459741 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:38.756923  459741 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.756887  459741 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 19:33:38.756866  459741 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:38.756875  459741 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:38.757061  459741 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:38.758622  459741 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:38.758705  459741 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 19:33:38.758860  459741 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:38.758902  459741 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.758945  459741 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:38.758977  459741 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:38.759058  459741 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:38.759126  459741 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:38.947033  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 19:33:38.978340  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.989519  459741 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 19:33:38.989583  459741 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 19:33:38.989631  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.007170  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.034177  459741 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 19:33:39.034232  459741 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:39.034282  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.034287  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 19:33:39.062389  459741 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 19:33:39.062443  459741 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.062490  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.080521  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:39.080640  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 19:33:39.080739  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.101886  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.114010  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.122572  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.131514  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 19:33:39.145327  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 19:33:39.187564  459741 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 19:33:39.187685  459741 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.187756  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.192838  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.232745  459741 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 19:33:39.232807  459741 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.232822  459741 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 19:33:39.232864  459741 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.232897  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.232918  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.232867  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.249586  459741 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 19:33:39.249634  459741 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.249677  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.280522  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 19:33:39.280616  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.280622  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.280736  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.354545  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 19:33:39.354577  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 19:33:39.354740  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 19:33:39.640493  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:39.792919  459741 cache_images.go:92] duration metric: took 1.03622454s to LoadCachedImages
	W0717 19:33:39.793071  459741 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0717 19:33:39.793093  459741 kubeadm.go:934] updating node { 192.168.72.208 8443 v1.20.0 crio true true} ...
	I0717 19:33:39.793266  459741 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-998147 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:39.793390  459741 ssh_runner.go:195] Run: crio config
	I0717 19:33:39.854291  459741 cni.go:84] Creating CNI manager for ""
	I0717 19:33:39.854320  459741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:39.854333  459741 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:39.854355  459741 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-998147 NodeName:old-k8s-version-998147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 19:33:39.854569  459741 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-998147"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:39.854672  459741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 19:33:39.865802  459741 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:39.865892  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:39.878728  459741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 19:33:39.899402  459741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:39.917946  459741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
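For orientation: the kubeadm config printed above is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new and, once promoted, drives the restart phase by phase. A rough sketch of that sequence (the exact invocations appear further down in this log) looks like:

    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    # each phase runs with the matching v1.20.0 binaries first on PATH
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml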
	I0717 19:33:39.937916  459741 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:39.942211  459741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:39.957083  459741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:40.077407  459741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:40.096211  459741 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147 for IP: 192.168.72.208
	I0717 19:33:40.096244  459741 certs.go:194] generating shared ca certs ...
	I0717 19:33:40.096269  459741 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:40.096511  459741 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:40.096578  459741 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:40.096592  459741 certs.go:256] generating profile certs ...
	I0717 19:33:40.096727  459741 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/client.key
	I0717 19:33:40.096794  459741 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key.204e9011
	I0717 19:33:40.096852  459741 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key
	I0717 19:33:40.097009  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:40.097049  459741 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:40.097062  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:40.097095  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:40.097133  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:40.097161  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:40.097215  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:40.097920  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:40.144174  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:40.182700  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:40.222340  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:40.259248  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 19:33:40.302619  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:33:40.335170  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:40.373447  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:33:40.409075  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:40.435692  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:40.460419  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:40.492357  459741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:40.515212  459741 ssh_runner.go:195] Run: openssl version
	I0717 19:33:40.523462  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:40.537951  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.544201  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.544264  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.552233  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:40.567486  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:40.583035  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.589287  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.589367  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.595802  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:40.613013  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:40.625080  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.630225  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.630298  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.636697  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
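The 8-hex-digit file names in the commands above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash links: openssl x509 -hash -noout prints the hash under which TLS libraries look a CA up in /etc/ssl/certs. A minimal sketch of that pattern, using the minikubeCA paths from this log:

    # compute the subject hash and create the lookup symlink OpenSSL expects
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"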
	I0717 19:33:40.647728  459741 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:40.653165  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:40.659380  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:40.666126  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:40.673361  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:40.680123  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:40.686669  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
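The -checkend 86400 flag in the openssl runs above makes the command exit 0 only if the certificate will still be valid 86400 seconds (24 hours) from now; the log runs this over each control-plane cert before reusing it. A minimal sketch of the check, using one of the paths from this log:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "certificate still valid for at least 24h"
    else
        echo "certificate missing or expiring within 24h"
    fi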
	I0717 19:33:40.693569  459741 kubeadm.go:392] StartCluster: {Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:40.693682  459741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:40.693767  459741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:40.737536  459741 cri.go:89] found id: ""
	I0717 19:33:40.737637  459741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:40.749268  459741 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:40.749292  459741 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:40.749347  459741 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:40.760298  459741 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:40.761436  459741 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-998147" does not appear in /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:33:40.762162  459741 kubeconfig.go:62] /home/jenkins/minikube-integration/19282-392903/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-998147" cluster setting kubeconfig missing "old-k8s-version-998147" context setting]
	I0717 19:33:40.763136  459741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:40.860353  459741 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:40.871291  459741 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.208
	I0717 19:33:40.871329  459741 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:40.871348  459741 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:40.871404  459741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:40.909329  459741 cri.go:89] found id: ""
	I0717 19:33:40.909419  459741 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:40.926501  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:40.937534  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:40.937565  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:40.937640  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:40.946613  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:40.946692  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:40.956996  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:40.965988  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:40.966046  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:40.975285  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:40.984577  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:40.984642  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:40.994458  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:41.007766  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:41.007821  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:41.020451  459741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:41.034173  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:41.176766  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:38.694137  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:40.694562  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:41.594983  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:41.595523  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:41.595554  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:41.595469  460929 retry.go:31] will retry after 2.022688841s: waiting for machine to come up
	I0717 19:33:43.619805  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:43.620241  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:43.620277  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:43.620212  460929 retry.go:31] will retry after 3.581051367s: waiting for machine to come up
	I0717 19:33:41.896941  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:44.394301  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:42.579917  459741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.403105878s)
	I0717 19:33:42.579958  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:42.840718  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:42.961394  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:43.055710  459741 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:43.055799  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:43.556468  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:44.055954  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:44.555966  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:45.056266  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:45.556627  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:46.056807  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:42.695989  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:45.194178  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:47.195661  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:47.205836  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:47.206321  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:47.206343  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:47.206278  460929 retry.go:31] will retry after 4.261122451s: waiting for machine to come up
	I0717 19:33:46.894466  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:49.395152  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:46.555904  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:47.056616  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:47.556787  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:48.056072  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:48.555979  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:49.056074  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:49.556619  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:50.056758  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:50.555862  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:51.055991  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:49.692660  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.693700  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.470426  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.470961  459061 main.go:141] libmachine: (embed-certs-637675) Found IP for machine: 192.168.39.140
	I0717 19:33:51.470987  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has current primary IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.470994  459061 main.go:141] libmachine: (embed-certs-637675) Reserving static IP address...
	I0717 19:33:51.471473  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "embed-certs-637675", mac: "52:54:00:33:d5:fa", ip: "192.168.39.140"} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.471502  459061 main.go:141] libmachine: (embed-certs-637675) Reserved static IP address: 192.168.39.140
	I0717 19:33:51.471530  459061 main.go:141] libmachine: (embed-certs-637675) DBG | skip adding static IP to network mk-embed-certs-637675 - found existing host DHCP lease matching {name: "embed-certs-637675", mac: "52:54:00:33:d5:fa", ip: "192.168.39.140"}
	I0717 19:33:51.471548  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Getting to WaitForSSH function...
	I0717 19:33:51.471563  459061 main.go:141] libmachine: (embed-certs-637675) Waiting for SSH to be available...
	I0717 19:33:51.474038  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.474414  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.474445  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.474588  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Using SSH client type: external
	I0717 19:33:51.474617  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa (-rw-------)
	I0717 19:33:51.474655  459061 main.go:141] libmachine: (embed-certs-637675) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:51.474675  459061 main.go:141] libmachine: (embed-certs-637675) DBG | About to run SSH command:
	I0717 19:33:51.474699  459061 main.go:141] libmachine: (embed-certs-637675) DBG | exit 0
	I0717 19:33:51.604737  459061 main.go:141] libmachine: (embed-certs-637675) DBG | SSH cmd err, output: <nil>: 
	I0717 19:33:51.605100  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetConfigRaw
	I0717 19:33:51.605831  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:51.608613  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.608977  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.609023  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.609289  459061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/config.json ...
	I0717 19:33:51.609523  459061 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:51.609557  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:51.609778  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.611949  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.612259  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.612295  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.612408  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.612598  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.612765  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.612911  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.613071  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.613293  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.613307  459061 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:51.716785  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:51.716815  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.717101  459061 buildroot.go:166] provisioning hostname "embed-certs-637675"
	I0717 19:33:51.717136  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.717318  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.719807  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.720137  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.720163  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.720315  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.720545  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.720719  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.720892  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.721086  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.721258  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.721271  459061 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-637675 && echo "embed-certs-637675" | sudo tee /etc/hostname
	I0717 19:33:51.844077  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-637675
	
	I0717 19:33:51.844111  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.847369  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.847949  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.847987  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.848185  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.848361  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.848523  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.848703  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.848912  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.849127  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.849145  459061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-637675' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-637675/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-637675' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:51.961570  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:33:51.961608  459061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:51.961632  459061 buildroot.go:174] setting up certificates
	I0717 19:33:51.961644  459061 provision.go:84] configureAuth start
	I0717 19:33:51.961658  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.961931  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:51.964788  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.965123  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.965150  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.965303  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.967517  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.967881  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.967910  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.968060  459061 provision.go:143] copyHostCerts
	I0717 19:33:51.968129  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:51.968140  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:51.968203  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:51.968333  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:51.968344  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:51.968371  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:51.968546  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:51.968558  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:51.968605  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:51.968692  459061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.embed-certs-637675 san=[127.0.0.1 192.168.39.140 embed-certs-637675 localhost minikube]
	I0717 19:33:52.257323  459061 provision.go:177] copyRemoteCerts
	I0717 19:33:52.257408  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:52.257443  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.260461  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.260873  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.260897  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.261094  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.261307  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.261485  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.261619  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.347197  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:52.372509  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 19:33:52.397643  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:52.421482  459061 provision.go:87] duration metric: took 459.823049ms to configureAuth
	I0717 19:33:52.421511  459061 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:52.421712  459061 config.go:182] Loaded profile config "embed-certs-637675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:33:52.421789  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.424390  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.424796  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.424827  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.425027  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.425221  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.425363  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.425502  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.425661  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:52.425872  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:52.425902  459061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:52.699426  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:52.699458  459061 machine.go:97] duration metric: took 1.089918524s to provisionDockerMachine
	I0717 19:33:52.699470  459061 start.go:293] postStartSetup for "embed-certs-637675" (driver="kvm2")
	I0717 19:33:52.699483  459061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:52.699505  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.699888  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:52.699943  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.703018  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.703417  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.703463  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.703693  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.704007  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.704318  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.704519  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.791925  459061 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:52.795954  459061 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:52.795980  459061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:52.796095  459061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:52.796191  459061 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:52.796308  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:52.805548  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:52.829531  459061 start.go:296] duration metric: took 130.04771ms for postStartSetup
	I0717 19:33:52.829569  459061 fix.go:56] duration metric: took 20.611916701s for fixHost
	I0717 19:33:52.829611  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.832274  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.832744  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.832778  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.832883  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.833094  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.833276  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.833448  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.833632  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:52.833852  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:52.833871  459061 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:33:52.941152  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244832.915250809
	
	I0717 19:33:52.941180  459061 fix.go:216] guest clock: 1721244832.915250809
	I0717 19:33:52.941194  459061 fix.go:229] Guest: 2024-07-17 19:33:52.915250809 +0000 UTC Remote: 2024-07-17 19:33:52.829573693 +0000 UTC m=+356.572558813 (delta=85.677116ms)
	I0717 19:33:52.941221  459061 fix.go:200] guest clock delta is within tolerance: 85.677116ms
	I0717 19:33:52.941232  459061 start.go:83] releasing machines lock for "embed-certs-637675", held for 20.723622875s
	I0717 19:33:52.941257  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.941557  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:52.944096  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.944498  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.944526  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.944682  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945170  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945409  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945520  459061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:52.945595  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.945624  459061 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:52.945653  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.948197  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948530  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.948557  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948575  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948781  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.948912  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.948936  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948966  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.949080  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.949205  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.949228  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.949348  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.949352  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.949465  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:53.054206  459061 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:53.060916  459061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:53.204303  459061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:53.210204  459061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:53.210262  459061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:53.226045  459061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:53.226072  459061 start.go:495] detecting cgroup driver to use...
	I0717 19:33:53.226138  459061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:53.243047  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:53.256611  459061 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:53.256678  459061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:53.269932  459061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:53.285394  459061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:53.412896  459061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:53.573675  459061 docker.go:233] disabling docker service ...
	I0717 19:33:53.573749  459061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:53.590083  459061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:53.603710  459061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:53.727530  459061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:53.873274  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:53.905871  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:53.926509  459061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:33:53.926583  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.937258  459061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:53.937333  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.947782  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.958191  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.970004  459061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:53.982062  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.992942  459061 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:54.011137  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:54.022170  459061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:54.033118  459061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:54.033183  459061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:54.046510  459061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
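	Note: the failed "sudo sysctl net.bridge.bridge-nf-call-iptables" above only means the br_netfilter kernel module was not loaded yet; the log shows the fallback of running "sudo modprobe br_netfilter" and then enabling IP forwarding before restarting crio. A minimal, hypothetical Go sketch of that check-then-load fallback (illustrative only, not minikube's actual code; it needs root to have any effect):

	// ensure_br_netfilter.go - hypothetical sketch of the fallback seen above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBrNetfilter returns nil if the bridge netfilter sysctl is available,
	// loading the br_netfilter module first when it is missing.
	func ensureBrNetfilter() error {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err == nil {
			return nil // sysctl already present, module is loaded
		}
		// Sysctl missing: load the module, after which the sysctl appears.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := ensureBrNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}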
	I0717 19:33:54.056086  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:54.203486  459061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:33:54.336557  459061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:54.336645  459061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:54.342342  459061 start.go:563] Will wait 60s for crictl version
	I0717 19:33:54.342422  459061 ssh_runner.go:195] Run: which crictl
	I0717 19:33:54.346334  459061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:54.388801  459061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:54.388898  459061 ssh_runner.go:195] Run: crio --version
	I0717 19:33:54.419237  459061 ssh_runner.go:195] Run: crio --version
	I0717 19:33:54.459513  459061 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 19:33:54.460727  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:54.463803  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:54.464194  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:54.464235  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:54.464521  459061 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:54.469869  459061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:54.484510  459061 kubeadm.go:883] updating cluster {Name:embed-certs-637675 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:54.484680  459061 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:33:54.484750  459061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:54.530253  459061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 19:33:54.530339  459061 ssh_runner.go:195] Run: which lz4
	I0717 19:33:54.534466  459061 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:33:54.538610  459061 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:54.538642  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 19:33:55.923529  459061 crio.go:462] duration metric: took 1.389095679s to copy over tarball
	I0717 19:33:55.923617  459061 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:51.894538  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:53.896853  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:56.394940  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.556187  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:52.056816  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:52.555884  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.056440  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.556003  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:54.056810  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:54.556947  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:55.055878  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:55.556110  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:56.056460  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
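	Note: the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above are a roughly 500ms polling loop waiting for the kube-apiserver process to appear after the "kubeadm init phase" commands finish. A minimal, hypothetical Go sketch of that kind of wait loop (the pattern, interval, and timeout are illustrative only, not minikube's actual implementation):

	// wait_for_apiserver.go - hypothetical sketch of the polling loop above.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep until a process matching pattern exists or ctx expires.
	func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
		for {
			// pgrep exits 0 when at least one process matches the pattern.
			if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
			case <-time.After(interval):
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		defer cancel()
		if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
			fmt.Println(err)
		}
	}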
	I0717 19:33:53.693746  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:55.695193  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:58.139069  459061 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.215401803s)
	I0717 19:33:58.139116  459061 crio.go:469] duration metric: took 2.215553314s to extract the tarball
	I0717 19:33:58.139127  459061 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:58.178293  459061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:58.219163  459061 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:33:58.219188  459061 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:33:58.219197  459061 kubeadm.go:934] updating node { 192.168.39.140 8443 v1.30.2 crio true true} ...
	I0717 19:33:58.219306  459061 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-637675 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:58.219383  459061 ssh_runner.go:195] Run: crio config
	I0717 19:33:58.262906  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:33:58.262925  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:58.262934  459061 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:58.262957  459061 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.140 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-637675 NodeName:embed-certs-637675 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:58.263084  459061 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-637675"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:58.263147  459061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 19:33:58.273657  459061 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:58.273723  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:58.283599  459061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0717 19:33:58.300393  459061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:58.317742  459061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0717 19:33:58.334880  459061 ssh_runner.go:195] Run: grep 192.168.39.140	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:58.338573  459061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:58.350476  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:58.480706  459061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:58.498116  459061 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675 for IP: 192.168.39.140
	I0717 19:33:58.498139  459061 certs.go:194] generating shared ca certs ...
	I0717 19:33:58.498161  459061 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:58.498326  459061 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:58.498380  459061 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:58.498394  459061 certs.go:256] generating profile certs ...
	I0717 19:33:58.498518  459061 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/client.key
	I0717 19:33:58.498580  459061 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.key.c8cdbf09
	I0717 19:33:58.498853  459061 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.key
	I0717 19:33:58.499016  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:58.499066  459061 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:58.499081  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:58.499115  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:58.499256  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:58.499299  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:58.499435  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:58.500359  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:58.544981  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:58.588099  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:58.621983  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:58.652262  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 19:33:58.676887  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:33:58.701437  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:58.726502  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:58.751839  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:58.777500  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:58.801388  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:58.825450  459061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:58.842717  459061 ssh_runner.go:195] Run: openssl version
	I0717 19:33:58.848256  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:58.858519  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.863057  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.863130  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.869045  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:58.879255  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:58.890546  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.895342  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.895394  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.901225  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:58.912043  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:58.922557  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.926974  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.927063  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.932819  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:58.943396  459061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:58.947900  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:58.953946  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:58.960139  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:58.965932  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:58.971638  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:58.977437  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
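The -checkend runs above ask openssl whether each control-plane certificate expires within the next 86400 seconds (24 hours). A minimal pure-Go sketch of the same check, with an example certificate path rather than anything read from this run:

    // Equivalent of `openssl x509 -noout -checkend 86400`: does the PEM-encoded
    // certificate expire within the given duration?
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// Expired (or expiring) if "now + d" falls past NotAfter.
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }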
	I0717 19:33:58.983041  459061 kubeadm.go:392] StartCluster: {Name:embed-certs-637675 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:58.983125  459061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:58.983159  459061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:59.026606  459061 cri.go:89] found id: ""
	I0717 19:33:59.026700  459061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:59.037020  459061 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:59.037045  459061 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:59.037089  459061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:59.046698  459061 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:59.047817  459061 kubeconfig.go:125] found "embed-certs-637675" server: "https://192.168.39.140:8443"
	I0717 19:33:59.049941  459061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:59.059451  459061 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.140
	I0717 19:33:59.059482  459061 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:59.059500  459061 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:59.059544  459061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:59.095066  459061 cri.go:89] found id: ""
	I0717 19:33:59.095128  459061 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:59.112170  459061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:59.122995  459061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:59.123014  459061 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:59.123063  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:59.133289  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:59.133372  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:59.143515  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:59.152845  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:59.152898  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:59.162821  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:59.173290  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:59.173353  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:59.184053  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:59.195281  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:59.195345  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
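The grep/rm sequence above keeps each kubeconfig only when it already references https://control-plane.minikube.internal:8443 and deletes it otherwise, so the kubeadm kubeconfig phase below can regenerate it. A small sketch of that pattern, reusing the file list and endpoint from the log; this is an illustration, not minikube's implementation:

    // Keep a kubeconfig only if it points at the expected control-plane
    // endpoint; otherwise remove it so `kubeadm init phase kubeconfig`
    // writes a fresh one.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
    			// Missing file or wrong endpoint: drop it and let kubeadm regenerate it.
    			_ = os.Remove(f)
    			fmt.Println("removed (stale or missing):", f)
    			continue
    		}
    		fmt.Println("kept:", f)
    	}
    }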
	I0717 19:33:59.205300  459061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:59.219019  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:59.337326  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.220304  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.451460  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.631448  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.701064  459061 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:34:00.701166  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.201848  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.895830  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.394535  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:56.556934  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.055977  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.556878  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.056308  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.556348  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:59.056674  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:59.556870  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:00.055931  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:00.555977  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.055886  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.695265  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:59.973534  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:02.193004  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.701254  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.809514  459061 api_server.go:72] duration metric: took 1.10844859s to wait for apiserver process to appear ...
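The repeated pgrep calls above form a poll loop: run pgrep until the kube-apiserver process appears or a deadline passes. A sketch of the same loop, with the roughly 500ms interval read off the log timestamps and the timeout chosen purely as an assumption:

    // Poll `pgrep` until the kube-apiserver process shows up. The pattern string
    // is the one from the log; the timeout is an illustrative value.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    func waitForAPIServerProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("kube-apiserver process is running")
    }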
	I0717 19:34:01.809547  459061 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:34:01.809597  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:01.810183  459061 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I0717 19:34:02.309904  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.789701  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:34:04.789732  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:34:04.789745  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.862326  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:34:04.862359  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:34:04.862371  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.885715  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:04.885755  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:05.310281  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:05.314611  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:05.314645  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:05.810297  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:05.817458  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:05.817492  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:03.395467  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:05.894353  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.556897  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:02.056800  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:02.556122  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:03.056427  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:03.556914  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.056571  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.556144  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:05.056037  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:05.555875  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:06.056743  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.193618  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.194585  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.310494  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:06.318694  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:06.318740  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:06.809794  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:06.815231  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:06.815259  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:07.310287  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:07.314865  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:07.314892  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:07.810489  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:07.815153  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:07.815184  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:08.310494  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:08.315173  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0717 19:34:08.321509  459061 api_server.go:141] control plane version: v1.30.2
	I0717 19:34:08.321539  459061 api_server.go:131] duration metric: took 6.51198343s to wait for apiserver health ...
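The healthz wait above progresses from connection refused, to 403 (anonymous requests are rejected until the bootstrap RBAC roles exist), to 500 with unfinished poststarthooks, and finally to 200. A sketch of such a poll; it skips TLS verification purely to stay short, which is an assumption a real check should replace by trusting the cluster CA:

    // Poll the apiserver's /healthz endpoint until it returns 200 or a timeout.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"os"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver reports healthy
    			}
    			// A 403 for system:anonymous typically clears once the RBAC bootstrap
    			// roles are created; a 500 lists the poststarthooks still pending.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.140:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("ok")
    }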
	I0717 19:34:08.321550  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:34:08.321558  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:34:08.323369  459061 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:34:08.324555  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:34:08.336384  459061 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
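The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration announced by the "Configuring bridge CNI" step. Its exact contents are not reproduced in this log; the sketch below writes a generic bridge + host-local conflist only to illustrate the file format and location, and every field value is an assumption:

    // Write an illustrative bridge CNI conflist to the directory minikube uses.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // NOT the exact file minikube wrote above; a standard bridge/host-local example.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	dir := "/etc/cni/net.d"
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }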
	I0717 19:34:08.357196  459061 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:34:08.373813  459061 system_pods.go:59] 8 kube-system pods found
	I0717 19:34:08.373849  459061 system_pods.go:61] "coredns-7db6d8ff4d-8brst" [aec5eaab-66a7-4221-84a1-b7967bd26cb8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:34:08.373856  459061 system_pods.go:61] "etcd-embed-certs-637675" [f2e395a3-fd1f-4a92-98ce-d6093d7b2faf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:34:08.373864  459061 system_pods.go:61] "kube-apiserver-embed-certs-637675" [358154e3-59e5-4535-9e1d-ee3b9eab5464] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:34:08.373871  459061 system_pods.go:61] "kube-controller-manager-embed-certs-637675" [641c70ba-a6fa-4975-bdb5-727b5ba64a87] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:34:08.373875  459061 system_pods.go:61] "kube-proxy-4cv66" [1a561d4e-4910-4ff0-9a1e-070e60e27cb4] Running
	I0717 19:34:08.373879  459061 system_pods.go:61] "kube-scheduler-embed-certs-637675" [83f50c1c-44ca-4b1f-ad85-0c617f1c8a67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:34:08.373886  459061 system_pods.go:61] "metrics-server-569cc877fc-mtnc6" [c44ea24f-67b5-4540-8c27-5b0068ac55b1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:34:08.373889  459061 system_pods.go:61] "storage-provisioner" [c42c411b-4206-4686-95c4-c9c279877684] Running
	I0717 19:34:08.373895  459061 system_pods.go:74] duration metric: took 16.671935ms to wait for pod list to return data ...
	I0717 19:34:08.373902  459061 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:34:08.388698  459061 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:34:08.388737  459061 node_conditions.go:123] node cpu capacity is 2
	I0717 19:34:08.388749  459061 node_conditions.go:105] duration metric: took 14.84302ms to run NodePressure ...
	I0717 19:34:08.388769  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:08.750983  459061 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:34:08.759547  459061 kubeadm.go:739] kubelet initialised
	I0717 19:34:08.759579  459061 kubeadm.go:740] duration metric: took 8.564098ms waiting for restarted kubelet to initialise ...
	I0717 19:34:08.759592  459061 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:34:08.769683  459061 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.780332  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.780364  459061 pod_ready.go:81] duration metric: took 10.641436ms for pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.780377  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.780387  459061 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.791556  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "etcd-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.791590  459061 pod_ready.go:81] duration metric: took 11.19204ms for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.791605  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "etcd-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.791613  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.801822  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.801874  459061 pod_ready.go:81] duration metric: took 10.246706ms for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.801889  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.801905  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.807704  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.807735  459061 pod_ready.go:81] duration metric: took 5.8166ms for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.807747  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.807755  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4cv66" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:09.161548  459061 pod_ready.go:92] pod "kube-proxy-4cv66" in "kube-system" namespace has status "Ready":"True"
	I0717 19:34:09.161587  459061 pod_ready.go:81] duration metric: took 353.822822ms for pod "kube-proxy-4cv66" in "kube-system" namespace to be "Ready" ...
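Each pod_ready wait above polls the pod's Ready condition until it reports True or the 4m0s budget runs out. A client-go sketch of the underlying check, using a pod name taken from this log and a KUBECONFIG path as placeholders; the surrounding retry loop is omitted for brevity:

    // Check a pod's Ready condition via client-go.
    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-4cv66", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s Ready=%v\n", pod.Name, podReady(pod))
    }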
	I0717 19:34:09.161597  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:11.168387  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:07.894730  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:09.895797  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.556740  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:07.056120  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:07.556375  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.055926  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.556426  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:09.056856  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:09.556032  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:10.056791  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:10.556117  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:11.056198  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.694237  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:11.192662  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:13.168686  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:15.668585  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:12.395034  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:14.895242  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:11.556103  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:12.056463  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:12.556709  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.056048  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.556926  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:14.056810  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:14.556793  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:15.056168  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:15.556716  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:16.056041  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.194925  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:15.693550  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:17.668639  459061 pod_ready.go:92] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:34:17.668755  459061 pod_ready.go:81] duration metric: took 8.50714283s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:17.668772  459061 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:19.678850  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:17.395670  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:19.395898  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:21.396841  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:16.556695  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.056877  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.556620  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:18.056628  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:18.556552  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:19.056137  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:19.556627  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:20.056655  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:20.556041  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:21.056058  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.694895  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:20.194174  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:22.176132  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:24.674293  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:23.894981  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.394921  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:21.556663  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.056552  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.556508  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:23.056623  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:23.556414  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:24.055964  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:24.556741  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:25.056721  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:25.556914  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:26.056520  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.693472  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:24.693880  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.695637  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.675680  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:29.176560  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:28.896034  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.394391  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.555925  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:27.056754  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:27.555925  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:28.056226  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:28.556626  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.056219  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.556961  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:30.056546  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:30.555883  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:31.056398  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.195231  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.693669  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.674839  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:33.676172  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:35.676669  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:33.394904  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:35.399901  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.556766  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:32.056928  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:32.556232  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:33.055917  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:33.556864  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.056869  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.555951  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:35.056718  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:35.556230  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:36.056542  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.195066  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:36.692760  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:38.175828  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:40.676034  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:37.894862  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:40.399004  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:36.556557  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:37.056940  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:37.556241  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.056369  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.555969  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:39.056289  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:39.556107  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:40.055999  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:40.556561  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:41.055882  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.693922  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:41.194229  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:42.676087  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:44.680245  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:42.898155  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:45.402470  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:41.556589  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:42.055932  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:42.556345  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:43.056754  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:43.056873  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:43.097168  459741 cri.go:89] found id: ""
	I0717 19:34:43.097214  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.097226  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:43.097234  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:43.097302  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:43.139033  459741 cri.go:89] found id: ""
	I0717 19:34:43.139067  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.139077  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:43.139084  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:43.139138  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:43.179520  459741 cri.go:89] found id: ""
	I0717 19:34:43.179549  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.179558  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:43.179566  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:43.179705  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:43.216014  459741 cri.go:89] found id: ""
	I0717 19:34:43.216044  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.216063  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:43.216071  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:43.216141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:43.250985  459741 cri.go:89] found id: ""
	I0717 19:34:43.251030  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.251038  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:43.251044  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:43.251109  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:43.286797  459741 cri.go:89] found id: ""
	I0717 19:34:43.286840  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.286849  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:43.286856  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:43.286919  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:43.321626  459741 cri.go:89] found id: ""
	I0717 19:34:43.321657  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.321665  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:43.321671  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:43.321733  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:43.355415  459741 cri.go:89] found id: ""
	I0717 19:34:43.355444  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.355452  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:43.355462  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:43.355476  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:43.409331  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:43.409369  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:43.424013  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:43.424038  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:43.559102  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:43.559132  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:43.559149  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:43.625751  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:43.625791  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
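	The block above is one complete diagnostic pass by the remaining cluster (PID 459741): the apiserver process never shows up under pgrep, every control-plane container query through crictl returns an empty list, and the harness falls back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output. A sketch of the same pass run by hand over SSH on the node, assembled only from commands that appear verbatim in the log (the v1.20.0 kubectl path is copied from the log, not assumed):

	  # 1. Is the apiserver process running at all?
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	  # 2. Ask the CRI directly for control-plane containers (empty output = none created yet)
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  sudo crictl ps -a --quiet --name=etcd
	  sudo crictl ps -a --quiet --name=kube-scheduler
	  sudo crictl ps -a --quiet --name=kube-controller-manager

	  # 3. Collect the diagnostics the harness gathers when nothing is found
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	  sudo crictl ps -a || sudo docker ps -a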
	I0717 19:34:46.168132  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:46.196943  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:46.197013  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:46.254167  459741 cri.go:89] found id: ""
	I0717 19:34:46.254197  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.254205  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:46.254211  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:46.254277  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:46.291018  459741 cri.go:89] found id: ""
	I0717 19:34:46.291052  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.291063  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:46.291072  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:46.291136  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:46.331767  459741 cri.go:89] found id: ""
	I0717 19:34:46.331812  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.331825  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:46.331835  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:46.331918  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:46.373157  459741 cri.go:89] found id: ""
	I0717 19:34:46.373206  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.373218  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:46.373226  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:46.373297  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:46.413014  459741 cri.go:89] found id: ""
	I0717 19:34:46.413041  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.413055  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:46.413061  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:46.413114  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:46.456115  459741 cri.go:89] found id: ""
	I0717 19:34:46.456148  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.456159  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:46.456167  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:46.456230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:46.492962  459741 cri.go:89] found id: ""
	I0717 19:34:46.493048  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.493063  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:46.493074  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:46.493149  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:43.195298  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:45.695368  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:47.175268  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:49.176199  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:47.895768  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:50.395078  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:46.533824  459741 cri.go:89] found id: ""
	I0717 19:34:46.533856  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.533868  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:46.533882  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:46.533899  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:46.614205  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:46.614229  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:46.614242  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:46.689833  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:46.689875  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:46.729427  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:46.729463  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:46.779887  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:46.779930  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:49.294846  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:49.308554  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:49.308625  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:49.343774  459741 cri.go:89] found id: ""
	I0717 19:34:49.343802  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.343810  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:49.343816  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:49.343872  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:49.380698  459741 cri.go:89] found id: ""
	I0717 19:34:49.380729  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.380737  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:49.380744  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:49.380796  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:49.422026  459741 cri.go:89] found id: ""
	I0717 19:34:49.422059  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.422073  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:49.422082  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:49.422147  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:49.465793  459741 cri.go:89] found id: ""
	I0717 19:34:49.465837  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.465850  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:49.465859  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:49.465929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:49.503462  459741 cri.go:89] found id: ""
	I0717 19:34:49.503507  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.503519  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:49.503528  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:49.503598  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:49.546776  459741 cri.go:89] found id: ""
	I0717 19:34:49.546808  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.546818  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:49.546826  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:49.546895  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:49.589367  459741 cri.go:89] found id: ""
	I0717 19:34:49.589401  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.589412  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:49.589420  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:49.589493  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:49.625497  459741 cri.go:89] found id: ""
	I0717 19:34:49.625532  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.625543  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:49.625557  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:49.625574  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:49.664499  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:49.664536  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:49.718160  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:49.718202  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:49.732774  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:49.732807  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:49.806951  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:49.806981  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:49.806999  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:48.192967  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:50.193695  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:51.675656  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:54.175342  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:56.176351  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:52.895953  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:55.394057  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:52.379790  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:52.393469  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:52.393554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:52.434277  459741 cri.go:89] found id: ""
	I0717 19:34:52.434312  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.434322  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:52.434330  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:52.434388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:52.470378  459741 cri.go:89] found id: ""
	I0717 19:34:52.470413  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.470421  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:52.470428  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:52.470501  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:52.506331  459741 cri.go:89] found id: ""
	I0717 19:34:52.506361  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.506369  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:52.506376  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:52.506431  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:52.547497  459741 cri.go:89] found id: ""
	I0717 19:34:52.547532  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.547540  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:52.547545  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:52.547615  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:52.584389  459741 cri.go:89] found id: ""
	I0717 19:34:52.584423  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.584434  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:52.584442  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:52.584527  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:52.621381  459741 cri.go:89] found id: ""
	I0717 19:34:52.621408  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.621416  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:52.621422  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:52.621472  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:52.661706  459741 cri.go:89] found id: ""
	I0717 19:34:52.661744  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.661756  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:52.661764  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:52.661832  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:52.702736  459741 cri.go:89] found id: ""
	I0717 19:34:52.702763  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.702773  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:52.702784  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:52.702799  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:52.741742  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:52.741779  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:52.794377  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:52.794429  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:52.809685  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:52.809717  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:52.884263  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:52.884289  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:52.884305  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:55.472342  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:55.486612  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:55.486677  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:55.519486  459741 cri.go:89] found id: ""
	I0717 19:34:55.519514  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.519522  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:55.519528  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:55.519638  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:55.555162  459741 cri.go:89] found id: ""
	I0717 19:34:55.555190  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.555198  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:55.555204  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:55.555259  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:55.591239  459741 cri.go:89] found id: ""
	I0717 19:34:55.591276  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.591288  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:55.591297  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:55.591359  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:55.628203  459741 cri.go:89] found id: ""
	I0717 19:34:55.628239  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.628251  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:55.628258  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:55.628347  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:55.664663  459741 cri.go:89] found id: ""
	I0717 19:34:55.664702  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.664715  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:55.664725  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:55.664822  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:55.702741  459741 cri.go:89] found id: ""
	I0717 19:34:55.702773  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.702780  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:55.702788  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:55.702862  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:55.745601  459741 cri.go:89] found id: ""
	I0717 19:34:55.745642  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.745653  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:55.745661  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:55.745742  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:55.786699  459741 cri.go:89] found id: ""
	I0717 19:34:55.786727  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.786736  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:55.786746  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:55.786764  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:55.831685  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:55.831722  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:55.885346  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:55.885389  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:55.902374  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:55.902407  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:55.974221  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:55.974245  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:55.974259  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:52.693991  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:55.194420  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:58.676747  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:01.176131  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:57.894988  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:00.394486  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:58.557685  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:58.571821  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:58.571887  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:58.606713  459741 cri.go:89] found id: ""
	I0717 19:34:58.606742  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.606751  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:58.606757  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:58.606831  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:58.640693  459741 cri.go:89] found id: ""
	I0717 19:34:58.640728  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.640738  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:58.640746  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:58.640816  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:58.675351  459741 cri.go:89] found id: ""
	I0717 19:34:58.675385  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.675396  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:58.675403  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:58.675470  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:58.711792  459741 cri.go:89] found id: ""
	I0717 19:34:58.711825  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.711834  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:58.711841  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:58.711898  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:58.751391  459741 cri.go:89] found id: ""
	I0717 19:34:58.751418  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.751427  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:58.751432  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:58.751492  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:58.789067  459741 cri.go:89] found id: ""
	I0717 19:34:58.789099  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.789109  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:58.789116  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:58.789193  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:58.827415  459741 cri.go:89] found id: ""
	I0717 19:34:58.827453  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.827464  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:58.827470  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:58.827538  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:58.865505  459741 cri.go:89] found id: ""
	I0717 19:34:58.865543  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.865553  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:58.865566  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:58.865587  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:58.921388  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:58.921427  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:58.935694  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:58.935724  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:59.012534  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:59.012561  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:59.012598  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:59.095950  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:59.096045  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:57.694041  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:00.194529  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:02.194641  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:03.176199  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:05.176261  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:02.894558  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:04.899436  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:01.640824  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:01.654969  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:01.655062  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:01.700480  459741 cri.go:89] found id: ""
	I0717 19:35:01.700528  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.700540  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:01.700548  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:01.700621  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:01.739274  459741 cri.go:89] found id: ""
	I0717 19:35:01.739309  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.739319  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:01.739327  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:01.739403  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:01.778555  459741 cri.go:89] found id: ""
	I0717 19:35:01.778591  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.778601  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:01.778609  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:01.778676  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:01.819147  459741 cri.go:89] found id: ""
	I0717 19:35:01.819189  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.819204  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:01.819213  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:01.819290  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:01.857132  459741 cri.go:89] found id: ""
	I0717 19:35:01.857178  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.857190  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:01.857199  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:01.857274  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:01.895551  459741 cri.go:89] found id: ""
	I0717 19:35:01.895583  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.895593  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:01.895602  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:01.895679  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:01.938146  459741 cri.go:89] found id: ""
	I0717 19:35:01.938185  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.938198  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:01.938206  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:01.938284  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:01.974876  459741 cri.go:89] found id: ""
	I0717 19:35:01.974909  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.974919  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:01.974933  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:01.974955  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:02.050651  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:02.050679  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:02.050711  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:02.130149  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:02.130191  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:02.170930  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:02.170961  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:02.226842  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:02.226889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:04.742978  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:04.757649  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:04.757714  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:04.795487  459741 cri.go:89] found id: ""
	I0717 19:35:04.795517  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.795525  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:04.795531  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:04.795583  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:04.832554  459741 cri.go:89] found id: ""
	I0717 19:35:04.832596  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.832607  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:04.832620  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:04.832678  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:04.867859  459741 cri.go:89] found id: ""
	I0717 19:35:04.867895  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.867904  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:04.867911  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:04.867971  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:04.905936  459741 cri.go:89] found id: ""
	I0717 19:35:04.905969  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.905978  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:04.905985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:04.906064  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:04.943177  459741 cri.go:89] found id: ""
	I0717 19:35:04.943204  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.943213  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:04.943219  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:04.943273  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:04.980038  459741 cri.go:89] found id: ""
	I0717 19:35:04.980073  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.980087  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:04.980093  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:04.980154  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:05.020848  459741 cri.go:89] found id: ""
	I0717 19:35:05.020885  459741 logs.go:276] 0 containers: []
	W0717 19:35:05.020896  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:05.020907  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:05.020985  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:05.060505  459741 cri.go:89] found id: ""
	I0717 19:35:05.060543  459741 logs.go:276] 0 containers: []
	W0717 19:35:05.060556  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:05.060592  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:05.060617  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:05.113354  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:05.113400  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:05.128045  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:05.128086  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:05.213923  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:05.214020  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:05.214045  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:05.296526  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:05.296577  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:04.194995  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:06.694576  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.678930  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:10.175252  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.394677  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:09.394932  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:11.395166  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.835865  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:07.851503  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:07.851581  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:07.899945  459741 cri.go:89] found id: ""
	I0717 19:35:07.899976  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.899984  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:07.899992  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:07.900066  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:07.938294  459741 cri.go:89] found id: ""
	I0717 19:35:07.938326  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.938335  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:07.938342  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:07.938402  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:07.975274  459741 cri.go:89] found id: ""
	I0717 19:35:07.975309  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.975319  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:07.975327  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:07.975401  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:08.010818  459741 cri.go:89] found id: ""
	I0717 19:35:08.010864  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.010873  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:08.010880  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:08.010945  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:08.054494  459741 cri.go:89] found id: ""
	I0717 19:35:08.054532  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.054544  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:08.054552  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:08.054651  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:08.096357  459741 cri.go:89] found id: ""
	I0717 19:35:08.096384  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.096393  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:08.096399  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:08.096461  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:08.134694  459741 cri.go:89] found id: ""
	I0717 19:35:08.134739  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.134749  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:08.134755  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:08.134833  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:08.171722  459741 cri.go:89] found id: ""
	I0717 19:35:08.171757  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.171768  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:08.171780  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:08.171797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:08.252441  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:08.252502  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:08.298782  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:08.298815  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:08.352934  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:08.352974  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:08.367121  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:08.367158  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:08.445860  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:10.946537  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:10.959955  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:10.960025  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:10.994611  459741 cri.go:89] found id: ""
	I0717 19:35:10.994646  459741 logs.go:276] 0 containers: []
	W0717 19:35:10.994658  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:10.994667  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:10.994733  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:11.031997  459741 cri.go:89] found id: ""
	I0717 19:35:11.032027  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.032035  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:11.032041  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:11.032115  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:11.073818  459741 cri.go:89] found id: ""
	I0717 19:35:11.073854  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.073865  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:11.073874  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:11.073942  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:11.109966  459741 cri.go:89] found id: ""
	I0717 19:35:11.110000  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.110012  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:11.110025  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:11.110100  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:11.146928  459741 cri.go:89] found id: ""
	I0717 19:35:11.146958  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.146980  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:11.146988  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:11.147056  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:11.189327  459741 cri.go:89] found id: ""
	I0717 19:35:11.189364  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.189374  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:11.189383  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:11.189457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:11.228587  459741 cri.go:89] found id: ""
	I0717 19:35:11.228628  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.228641  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:11.228650  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:11.228719  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:11.267624  459741 cri.go:89] found id: ""
	I0717 19:35:11.267671  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.267685  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:11.267699  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:11.267716  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:11.322589  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:11.322631  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:11.338101  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:11.338147  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:11.411360  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:11.411387  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:11.411405  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:11.495657  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:11.495701  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:09.194430  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:11.693290  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:12.175345  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:14.175825  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:16.177247  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:13.894711  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:15.894771  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:14.037797  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:14.050939  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:14.051012  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:14.093711  459741 cri.go:89] found id: ""
	I0717 19:35:14.093744  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.093756  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:14.093764  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:14.093837  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:14.132139  459741 cri.go:89] found id: ""
	I0717 19:35:14.132168  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.132180  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:14.132188  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:14.132256  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:14.170950  459741 cri.go:89] found id: ""
	I0717 19:35:14.170978  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.170988  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:14.170995  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:14.171073  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:14.211104  459741 cri.go:89] found id: ""
	I0717 19:35:14.211138  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.211148  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:14.211155  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:14.211229  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:14.245921  459741 cri.go:89] found id: ""
	I0717 19:35:14.245961  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.245975  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:14.245985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:14.246053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:14.309477  459741 cri.go:89] found id: ""
	I0717 19:35:14.309509  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.309520  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:14.309529  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:14.309617  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:14.346835  459741 cri.go:89] found id: ""
	I0717 19:35:14.346863  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.346872  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:14.346878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:14.346935  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:14.381258  459741 cri.go:89] found id: ""
	I0717 19:35:14.381289  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.381298  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:14.381307  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:14.381324  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:14.436214  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:14.436262  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:14.452446  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:14.452478  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:14.520238  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:14.520265  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:14.520282  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:14.600444  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:14.600502  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:13.694391  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:16.194147  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:18.676158  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.676984  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:18.394226  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.395263  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:17.144586  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:17.157992  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:17.158084  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:17.195200  459741 cri.go:89] found id: ""
	I0717 19:35:17.195228  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.195238  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:17.195245  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:17.195308  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:17.231846  459741 cri.go:89] found id: ""
	I0717 19:35:17.231892  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.231904  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:17.231913  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:17.231974  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:17.268234  459741 cri.go:89] found id: ""
	I0717 19:35:17.268261  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.268269  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:17.268275  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:17.268328  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:17.308536  459741 cri.go:89] found id: ""
	I0717 19:35:17.308565  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.308574  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:17.308581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:17.308655  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:17.344285  459741 cri.go:89] found id: ""
	I0717 19:35:17.344316  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.344325  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:17.344331  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:17.344393  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:17.384384  459741 cri.go:89] found id: ""
	I0717 19:35:17.384416  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.384425  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:17.384431  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:17.384518  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:17.422255  459741 cri.go:89] found id: ""
	I0717 19:35:17.422282  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.422291  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:17.422297  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:17.422349  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:17.459561  459741 cri.go:89] found id: ""
	I0717 19:35:17.459590  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.459599  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:17.459611  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:17.459628  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:17.473472  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:17.473510  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:17.544929  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:17.544962  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:17.544979  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:17.627230  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:17.627275  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:17.680586  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:17.680622  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:20.234582  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:20.248215  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:20.248282  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:20.286124  459741 cri.go:89] found id: ""
	I0717 19:35:20.286159  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.286171  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:20.286180  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:20.286251  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:20.323885  459741 cri.go:89] found id: ""
	I0717 19:35:20.323925  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.323938  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:20.323945  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:20.324013  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:20.363968  459741 cri.go:89] found id: ""
	I0717 19:35:20.364011  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.364025  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:20.364034  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:20.364108  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:20.404100  459741 cri.go:89] found id: ""
	I0717 19:35:20.404127  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.404136  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:20.404142  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:20.404212  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:20.442339  459741 cri.go:89] found id: ""
	I0717 19:35:20.442372  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.442383  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:20.442391  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:20.442462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:20.480461  459741 cri.go:89] found id: ""
	I0717 19:35:20.480505  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.480517  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:20.480526  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:20.480618  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:20.516072  459741 cri.go:89] found id: ""
	I0717 19:35:20.516104  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.516114  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:20.516119  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:20.516171  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:20.552294  459741 cri.go:89] found id: ""
	I0717 19:35:20.552333  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.552345  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:20.552359  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:20.552377  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:20.607025  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:20.607067  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:20.624323  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:20.624363  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:20.716528  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:20.716550  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:20.716567  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:20.797015  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:20.797059  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:18.693667  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.694367  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:23.175240  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:25.175374  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:22.893704  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:24.893940  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:23.345063  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:23.358664  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:23.358781  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:23.395399  459741 cri.go:89] found id: ""
	I0717 19:35:23.395429  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.395436  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:23.395441  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:23.395498  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:23.434827  459741 cri.go:89] found id: ""
	I0717 19:35:23.434866  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.434880  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:23.434889  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:23.434960  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:23.470884  459741 cri.go:89] found id: ""
	I0717 19:35:23.470915  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.470931  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:23.470937  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:23.470989  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:23.508532  459741 cri.go:89] found id: ""
	I0717 19:35:23.508566  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.508575  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:23.508581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:23.508636  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:23.543803  459741 cri.go:89] found id: ""
	I0717 19:35:23.543840  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.543856  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:23.543865  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:23.543938  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:23.578897  459741 cri.go:89] found id: ""
	I0717 19:35:23.578942  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.578953  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:23.578962  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:23.579028  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:23.617967  459741 cri.go:89] found id: ""
	I0717 19:35:23.618003  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.618013  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:23.618021  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:23.618092  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:23.660780  459741 cri.go:89] found id: ""
	I0717 19:35:23.660818  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.660830  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:23.660845  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:23.660862  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:23.745248  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:23.745305  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:23.784355  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:23.784392  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:23.838152  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:23.838199  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:23.853017  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:23.853046  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:23.932674  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:26.433476  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:26.457953  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:26.458030  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:23.192304  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:25.193087  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:27.176102  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:29.677887  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:26.895714  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:29.398017  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:26.515559  459741 cri.go:89] found id: ""
	I0717 19:35:26.515589  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.515598  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:26.515605  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:26.515668  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:26.555092  459741 cri.go:89] found id: ""
	I0717 19:35:26.555123  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.555134  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:26.555142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:26.555208  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:26.591291  459741 cri.go:89] found id: ""
	I0717 19:35:26.591335  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.591348  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:26.591357  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:26.591429  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:26.628941  459741 cri.go:89] found id: ""
	I0717 19:35:26.628970  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.628978  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:26.628985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:26.629050  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:26.668355  459741 cri.go:89] found id: ""
	I0717 19:35:26.668386  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.668394  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:26.668399  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:26.668457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:26.711810  459741 cri.go:89] found id: ""
	I0717 19:35:26.711846  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.711857  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:26.711865  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:26.711937  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:26.751674  459741 cri.go:89] found id: ""
	I0717 19:35:26.751708  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.751719  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:26.751726  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:26.751781  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:26.792690  459741 cri.go:89] found id: ""
	I0717 19:35:26.792784  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.792803  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:26.792816  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:26.792847  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:26.846466  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:26.846503  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:26.861467  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:26.861500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:26.934219  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:26.934244  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:26.934260  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:27.017150  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:27.017197  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:29.569360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:29.584040  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:29.584112  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:29.619704  459741 cri.go:89] found id: ""
	I0717 19:35:29.619738  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.619750  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:29.619756  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:29.619824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:29.655983  459741 cri.go:89] found id: ""
	I0717 19:35:29.656018  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.656030  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:29.656037  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:29.656103  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:29.694056  459741 cri.go:89] found id: ""
	I0717 19:35:29.694088  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.694098  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:29.694107  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:29.694165  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:29.731955  459741 cri.go:89] found id: ""
	I0717 19:35:29.732047  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.732066  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:29.732075  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:29.732142  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:29.765921  459741 cri.go:89] found id: ""
	I0717 19:35:29.765952  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.765961  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:29.765967  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:29.766022  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:29.798699  459741 cri.go:89] found id: ""
	I0717 19:35:29.798728  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.798736  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:29.798742  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:29.798804  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:29.832551  459741 cri.go:89] found id: ""
	I0717 19:35:29.832580  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.832587  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:29.832593  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:29.832652  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:29.867985  459741 cri.go:89] found id: ""
	I0717 19:35:29.868022  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.868033  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:29.868046  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:29.868071  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:29.941724  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:29.941746  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:29.941760  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:30.025462  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:30.025506  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:30.066732  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:30.066768  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:30.117389  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:30.117434  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:27.694070  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:30.193593  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.194062  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.175354  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:34.675049  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:31.894626  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:33.897661  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.394620  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.632779  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:32.648751  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:32.648828  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:32.686145  459741 cri.go:89] found id: ""
	I0717 19:35:32.686174  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.686182  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:32.686190  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:32.686242  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:32.721924  459741 cri.go:89] found id: ""
	I0717 19:35:32.721956  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.721967  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:32.721974  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:32.722042  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:32.760815  459741 cri.go:89] found id: ""
	I0717 19:35:32.760851  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.760862  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:32.760869  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:32.760939  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:32.797740  459741 cri.go:89] found id: ""
	I0717 19:35:32.797779  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.797792  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:32.797801  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:32.797878  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:32.833914  459741 cri.go:89] found id: ""
	I0717 19:35:32.833947  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.833955  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:32.833962  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:32.834020  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:32.870265  459741 cri.go:89] found id: ""
	I0717 19:35:32.870297  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.870306  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:32.870319  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:32.870388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:32.911340  459741 cri.go:89] found id: ""
	I0717 19:35:32.911380  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.911391  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:32.911402  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:32.911470  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:32.947932  459741 cri.go:89] found id: ""
	I0717 19:35:32.947967  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.947978  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:32.947990  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:32.948008  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:33.016473  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:33.016513  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:33.016527  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:33.096741  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:33.096783  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:33.137686  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:33.137723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:33.194110  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:33.194157  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:35.710074  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:35.723799  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:35.723880  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:35.759473  459741 cri.go:89] found id: ""
	I0717 19:35:35.759515  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.759526  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:35.759535  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:35.759606  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:35.796764  459741 cri.go:89] found id: ""
	I0717 19:35:35.796799  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.796809  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:35.796817  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:35.796892  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:35.831345  459741 cri.go:89] found id: ""
	I0717 19:35:35.831375  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.831386  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:35.831394  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:35.831463  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:35.869885  459741 cri.go:89] found id: ""
	I0717 19:35:35.869920  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.869931  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:35.869939  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:35.870009  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:35.908812  459741 cri.go:89] found id: ""
	I0717 19:35:35.908840  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.908849  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:35.908855  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:35.908909  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:35.946227  459741 cri.go:89] found id: ""
	I0717 19:35:35.946285  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.946297  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:35.946305  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:35.946387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:35.983534  459741 cri.go:89] found id: ""
	I0717 19:35:35.983577  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.983592  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:35.983601  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:35.983670  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:36.019516  459741 cri.go:89] found id: ""
	I0717 19:35:36.019552  459741 logs.go:276] 0 containers: []
	W0717 19:35:36.019564  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:36.019578  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:36.019597  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:36.070887  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:36.070931  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:36.087054  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:36.087092  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:36.163759  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:36.163795  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:36.163809  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:36.249968  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:36.250012  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:34.693272  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.693505  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.675472  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.677852  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:40.679662  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.895397  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:41.394394  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.799616  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:38.813094  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:38.813161  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:38.848696  459741 cri.go:89] found id: ""
	I0717 19:35:38.848731  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.848745  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:38.848754  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:38.848836  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:38.885898  459741 cri.go:89] found id: ""
	I0717 19:35:38.885932  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.885943  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:38.885950  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:38.886016  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:38.925499  459741 cri.go:89] found id: ""
	I0717 19:35:38.925531  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.925543  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:38.925550  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:38.925615  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:38.961176  459741 cri.go:89] found id: ""
	I0717 19:35:38.961209  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.961218  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:38.961225  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:38.961279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:38.998940  459741 cri.go:89] found id: ""
	I0717 19:35:38.998971  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.998980  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:38.998986  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:38.999040  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:39.034934  459741 cri.go:89] found id: ""
	I0717 19:35:39.034966  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.034973  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:39.034980  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:39.035034  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:39.070278  459741 cri.go:89] found id: ""
	I0717 19:35:39.070309  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.070319  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:39.070327  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:39.070413  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:39.106302  459741 cri.go:89] found id: ""
	I0717 19:35:39.106337  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.106348  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:39.106361  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:39.106379  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:39.145656  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:39.145685  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:39.198998  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:39.199042  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:39.215383  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:39.215416  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:39.284244  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:39.284270  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:39.284286  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:38.693865  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:40.694855  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:43.176915  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.676854  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:43.394736  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.395188  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:41.864335  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:41.878557  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:41.878645  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:41.919806  459741 cri.go:89] found id: ""
	I0717 19:35:41.919843  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.919856  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:41.919865  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:41.919938  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:41.956113  459741 cri.go:89] found id: ""
	I0717 19:35:41.956144  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.956154  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:41.956161  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:41.956230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:41.996211  459741 cri.go:89] found id: ""
	I0717 19:35:41.996256  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.996266  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:41.996274  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:41.996341  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:42.030800  459741 cri.go:89] found id: ""
	I0717 19:35:42.030829  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.030840  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:42.030847  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:42.030922  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:42.065307  459741 cri.go:89] found id: ""
	I0717 19:35:42.065347  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.065358  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:42.065368  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:42.065440  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:42.103574  459741 cri.go:89] found id: ""
	I0717 19:35:42.103609  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.103621  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:42.103628  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:42.103693  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:42.141146  459741 cri.go:89] found id: ""
	I0717 19:35:42.141181  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.141320  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:42.141337  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:42.141418  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:42.179958  459741 cri.go:89] found id: ""
	I0717 19:35:42.179986  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.179994  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:42.180004  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:42.180017  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:42.194911  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:42.194947  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:42.267709  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:42.267750  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:42.267772  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:42.347258  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:42.347302  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:42.393595  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:42.393631  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:44.946043  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:44.958994  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:44.959086  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:44.997687  459741 cri.go:89] found id: ""
	I0717 19:35:44.997724  459741 logs.go:276] 0 containers: []
	W0717 19:35:44.997735  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:44.997743  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:44.997814  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:45.038023  459741 cri.go:89] found id: ""
	I0717 19:35:45.038060  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.038070  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:45.038079  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:45.038141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:45.073529  459741 cri.go:89] found id: ""
	I0717 19:35:45.073562  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.073573  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:45.073581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:45.073644  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:45.109831  459741 cri.go:89] found id: ""
	I0717 19:35:45.109863  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.109871  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:45.109878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:45.109933  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:45.147828  459741 cri.go:89] found id: ""
	I0717 19:35:45.147867  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.147891  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:45.147899  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:45.147986  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:45.184729  459741 cri.go:89] found id: ""
	I0717 19:35:45.184765  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.184777  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:45.184784  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:45.184846  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:45.223895  459741 cri.go:89] found id: ""
	I0717 19:35:45.223940  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.223950  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:45.223956  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:45.224016  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:45.263391  459741 cri.go:89] found id: ""
	I0717 19:35:45.263421  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.263430  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:45.263440  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:45.263457  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:45.316323  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:45.316369  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:45.331447  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:45.331491  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:45.413226  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:45.413259  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:45.413277  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:45.498680  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:45.498738  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:43.193210  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.693264  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:48.175929  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:50.176109  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:47.893486  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:49.894666  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:48.043162  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:48.057081  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:48.057146  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:48.096607  459741 cri.go:89] found id: ""
	I0717 19:35:48.096636  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.096644  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:48.096650  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:48.096710  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:48.132865  459741 cri.go:89] found id: ""
	I0717 19:35:48.132895  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.132906  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:48.132913  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:48.132979  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:48.168060  459741 cri.go:89] found id: ""
	I0717 19:35:48.168090  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.168102  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:48.168109  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:48.168177  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:48.203993  459741 cri.go:89] found id: ""
	I0717 19:35:48.204023  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.204033  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:48.204041  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:48.204102  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:48.240321  459741 cri.go:89] found id: ""
	I0717 19:35:48.240353  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.240364  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:48.240371  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:48.240440  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:48.281103  459741 cri.go:89] found id: ""
	I0717 19:35:48.281147  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.281158  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:48.281167  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:48.281233  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:48.316002  459741 cri.go:89] found id: ""
	I0717 19:35:48.316034  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.316043  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:48.316049  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:48.316102  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:48.355370  459741 cri.go:89] found id: ""
	I0717 19:35:48.355399  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.355409  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:48.355421  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:48.355456  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:48.372448  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:48.372496  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:48.443867  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:48.443901  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:48.443919  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:48.519762  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:48.519807  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:48.562263  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:48.562297  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:51.112016  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:51.125350  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:51.125421  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:51.162053  459741 cri.go:89] found id: ""
	I0717 19:35:51.162090  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.162101  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:51.162111  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:51.162182  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:51.201853  459741 cri.go:89] found id: ""
	I0717 19:35:51.201924  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.201937  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:51.201944  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:51.202021  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:51.241675  459741 cri.go:89] found id: ""
	I0717 19:35:51.241709  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.241720  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:51.241729  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:51.241798  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:51.279332  459741 cri.go:89] found id: ""
	I0717 19:35:51.279369  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.279380  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:51.279388  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:51.279443  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:51.316375  459741 cri.go:89] found id: ""
	I0717 19:35:51.316413  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.316424  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:51.316432  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:51.316531  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:51.353300  459741 cri.go:89] found id: ""
	I0717 19:35:51.353337  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.353347  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:51.353355  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:51.353424  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:51.390413  459741 cri.go:89] found id: ""
	I0717 19:35:51.390441  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.390449  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:51.390457  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:51.390523  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:51.428040  459741 cri.go:89] found id: ""
	I0717 19:35:51.428077  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.428089  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:51.428103  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:51.428145  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:51.481743  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:51.481792  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:51.498226  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:51.498261  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:35:48.194645  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:50.194741  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:52.676762  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:55.177549  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:51.895688  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:54.394821  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	W0717 19:35:51.579871  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:51.579895  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:51.579909  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:51.659448  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:51.659490  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:54.201712  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:54.215688  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:54.215766  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:54.253448  459741 cri.go:89] found id: ""
	I0717 19:35:54.253479  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.253487  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:54.253493  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:54.253547  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:54.288135  459741 cri.go:89] found id: ""
	I0717 19:35:54.288176  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.288187  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:54.288194  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:54.288292  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:54.324798  459741 cri.go:89] found id: ""
	I0717 19:35:54.324845  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.324855  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:54.324864  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:54.324936  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:54.363909  459741 cri.go:89] found id: ""
	I0717 19:35:54.363943  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.363955  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:54.363964  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:54.364039  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:54.401221  459741 cri.go:89] found id: ""
	I0717 19:35:54.401248  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.401259  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:54.401267  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:54.401335  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:54.439258  459741 cri.go:89] found id: ""
	I0717 19:35:54.439285  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.439293  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:54.439299  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:54.439352  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:54.473321  459741 cri.go:89] found id: ""
	I0717 19:35:54.473358  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.473373  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:54.473379  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:54.473432  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:54.519107  459741 cri.go:89] found id: ""
	I0717 19:35:54.519141  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.519152  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:54.519167  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:54.519184  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:54.562666  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:54.562710  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:54.614711  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:54.614756  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:54.630953  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:54.630986  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:54.706639  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:54.706666  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:54.706684  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:52.694467  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:55.193366  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:57.179574  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:59.675883  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:56.895166  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:59.396238  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:57.289180  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:57.302364  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:57.302447  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:57.344401  459741 cri.go:89] found id: ""
	I0717 19:35:57.344437  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.344450  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:57.344459  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:57.344551  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:57.384095  459741 cri.go:89] found id: ""
	I0717 19:35:57.384126  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.384135  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:57.384142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:57.384209  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:57.422789  459741 cri.go:89] found id: ""
	I0717 19:35:57.422825  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.422836  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:57.422844  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:57.422914  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:57.460943  459741 cri.go:89] found id: ""
	I0717 19:35:57.460970  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.460979  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:57.460984  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:57.461035  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:57.495168  459741 cri.go:89] found id: ""
	I0717 19:35:57.495197  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.495204  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:57.495211  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:57.495267  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:57.529611  459741 cri.go:89] found id: ""
	I0717 19:35:57.529641  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.529649  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:57.529656  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:57.529719  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:57.565502  459741 cri.go:89] found id: ""
	I0717 19:35:57.565535  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.565544  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:57.565549  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:57.565610  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:57.601058  459741 cri.go:89] found id: ""
	I0717 19:35:57.601093  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.601107  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:57.601121  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:57.601139  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:57.651408  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:57.651450  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:57.665696  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:57.665734  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:57.739259  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:57.739301  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:57.739335  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:57.818085  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:57.818128  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:00.358441  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:00.371840  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:00.371904  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:00.411607  459741 cri.go:89] found id: ""
	I0717 19:36:00.411639  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.411647  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:00.411653  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:00.411717  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:00.448879  459741 cri.go:89] found id: ""
	I0717 19:36:00.448917  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.448929  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:00.448938  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:00.449006  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:00.489637  459741 cri.go:89] found id: ""
	I0717 19:36:00.489683  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.489695  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:00.489705  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:00.489773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:00.528172  459741 cri.go:89] found id: ""
	I0717 19:36:00.528206  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.528215  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:00.528221  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:00.528284  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:00.564857  459741 cri.go:89] found id: ""
	I0717 19:36:00.564891  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.564903  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:00.564911  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:00.564979  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:00.601226  459741 cri.go:89] found id: ""
	I0717 19:36:00.601257  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.601269  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:00.601277  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:00.601342  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:00.641481  459741 cri.go:89] found id: ""
	I0717 19:36:00.641515  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.641526  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:00.641533  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:00.641609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:00.678564  459741 cri.go:89] found id: ""
	I0717 19:36:00.678590  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.678598  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:00.678608  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:00.678622  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:00.763613  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:00.763657  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:00.804763  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:00.804797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:00.856648  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:00.856686  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:00.870767  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:00.870797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:00.949952  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:57.694827  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:00.193607  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:02.194404  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:01.676020  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:03.676246  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:05.676400  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:01.894566  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:04.394473  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:06.395396  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:03.450461  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:03.465429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:03.465500  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:03.504346  459741 cri.go:89] found id: ""
	I0717 19:36:03.504377  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.504387  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:03.504393  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:03.504457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:03.546643  459741 cri.go:89] found id: ""
	I0717 19:36:03.546671  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.546678  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:03.546685  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:03.546741  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:03.587389  459741 cri.go:89] found id: ""
	I0717 19:36:03.587423  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.587435  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:03.587443  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:03.587506  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:03.621968  459741 cri.go:89] found id: ""
	I0717 19:36:03.622002  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.622014  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:03.622023  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:03.622095  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:03.655934  459741 cri.go:89] found id: ""
	I0717 19:36:03.655967  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.655976  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:03.655982  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:03.656051  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:03.690464  459741 cri.go:89] found id: ""
	I0717 19:36:03.690493  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.690503  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:03.690511  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:03.690575  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:03.727030  459741 cri.go:89] found id: ""
	I0717 19:36:03.727068  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.727080  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:03.727088  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:03.727158  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:03.760858  459741 cri.go:89] found id: ""
	I0717 19:36:03.760898  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.760907  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:03.760917  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:03.760931  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:03.774333  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:03.774366  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:03.849228  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:03.849255  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:03.849273  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:03.930165  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:03.930203  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:03.971833  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:03.971875  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:04.693899  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:07.192840  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:07.678006  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:10.176147  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:08.395699  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:10.894333  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:06.525723  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:06.539410  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:06.539502  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:06.580112  459741 cri.go:89] found id: ""
	I0717 19:36:06.580152  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.580173  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:06.580181  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:06.580272  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:06.622098  459741 cri.go:89] found id: ""
	I0717 19:36:06.622128  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.622136  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:06.622142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:06.622209  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:06.669930  459741 cri.go:89] found id: ""
	I0717 19:36:06.669962  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.669973  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:06.669982  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:06.670048  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:06.717072  459741 cri.go:89] found id: ""
	I0717 19:36:06.717111  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.717124  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:06.717132  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:06.717207  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:06.756637  459741 cri.go:89] found id: ""
	I0717 19:36:06.756672  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.756680  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:06.756694  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:06.756756  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:06.804359  459741 cri.go:89] found id: ""
	I0717 19:36:06.804388  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.804397  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:06.804404  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:06.804468  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:06.856082  459741 cri.go:89] found id: ""
	I0717 19:36:06.856111  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.856120  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:06.856125  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:06.856180  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:06.898141  459741 cri.go:89] found id: ""
	I0717 19:36:06.898170  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.898180  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:06.898191  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:06.898209  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:06.975635  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:06.975660  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:06.975676  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:07.055695  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:07.055741  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:07.096041  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:07.096077  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:07.146523  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:07.146570  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:09.661906  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:09.676994  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:09.677078  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:09.716287  459741 cri.go:89] found id: ""
	I0717 19:36:09.716315  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.716328  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:09.716337  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:09.716405  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:09.759489  459741 cri.go:89] found id: ""
	I0717 19:36:09.759521  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.759532  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:09.759541  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:09.759601  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:09.799604  459741 cri.go:89] found id: ""
	I0717 19:36:09.799634  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.799643  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:09.799649  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:09.799709  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:09.839542  459741 cri.go:89] found id: ""
	I0717 19:36:09.839572  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.839581  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:09.839588  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:09.839666  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:09.879061  459741 cri.go:89] found id: ""
	I0717 19:36:09.879098  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.879110  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:09.879118  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:09.879184  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:09.920903  459741 cri.go:89] found id: ""
	I0717 19:36:09.920931  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.920939  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:09.920946  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:09.921002  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:09.956362  459741 cri.go:89] found id: ""
	I0717 19:36:09.956391  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.956411  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:09.956429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:09.956508  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:09.992817  459741 cri.go:89] found id: ""
	I0717 19:36:09.992849  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.992859  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:09.992872  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:09.992889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:10.060594  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:10.060620  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:10.060660  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:10.141840  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:10.141895  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:10.182850  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:10.182889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:10.238946  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:10.238993  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:09.194101  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:11.693468  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.675987  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:15.176665  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.894710  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:15.394738  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.753796  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:12.766740  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:12.766816  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:12.799307  459741 cri.go:89] found id: ""
	I0717 19:36:12.799341  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.799351  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:12.799362  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:12.799439  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:12.838345  459741 cri.go:89] found id: ""
	I0717 19:36:12.838395  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.838408  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:12.838416  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:12.838482  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:12.876780  459741 cri.go:89] found id: ""
	I0717 19:36:12.876807  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.876816  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:12.876822  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:12.876907  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:12.913222  459741 cri.go:89] found id: ""
	I0717 19:36:12.913253  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.913263  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:12.913271  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:12.913334  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:12.948210  459741 cri.go:89] found id: ""
	I0717 19:36:12.948245  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.948255  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:12.948263  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:12.948328  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:12.980746  459741 cri.go:89] found id: ""
	I0717 19:36:12.980782  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.980794  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:12.980806  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:12.980871  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:13.015655  459741 cri.go:89] found id: ""
	I0717 19:36:13.015694  459741 logs.go:276] 0 containers: []
	W0717 19:36:13.015707  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:13.015715  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:13.015773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:13.050570  459741 cri.go:89] found id: ""
	I0717 19:36:13.050609  459741 logs.go:276] 0 containers: []
	W0717 19:36:13.050617  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:13.050627  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:13.050642  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:13.101031  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:13.101072  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:13.115206  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:13.115239  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:13.190949  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:13.190979  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:13.190994  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:13.267467  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:13.267508  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:15.808237  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:15.822498  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:15.822570  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:15.860509  459741 cri.go:89] found id: ""
	I0717 19:36:15.860545  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.860556  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:15.860564  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:15.860630  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:15.895608  459741 cri.go:89] found id: ""
	I0717 19:36:15.895655  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.895666  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:15.895674  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:15.895738  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:15.936113  459741 cri.go:89] found id: ""
	I0717 19:36:15.936148  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.936159  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:15.936168  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:15.936254  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:15.973146  459741 cri.go:89] found id: ""
	I0717 19:36:15.973186  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.973198  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:15.973207  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:15.973273  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:16.006122  459741 cri.go:89] found id: ""
	I0717 19:36:16.006164  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.006175  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:16.006183  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:16.006255  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:16.044352  459741 cri.go:89] found id: ""
	I0717 19:36:16.044385  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.044397  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:16.044406  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:16.044476  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:16.081573  459741 cri.go:89] found id: ""
	I0717 19:36:16.081614  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.081625  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:16.081637  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:16.081707  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:16.120444  459741 cri.go:89] found id: ""
	I0717 19:36:16.120480  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.120506  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:16.120520  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:16.120536  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:16.171563  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:16.171601  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:16.185534  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:16.185564  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:16.258627  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:16.258657  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:16.258672  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:16.341345  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:16.341390  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:14.193370  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:16.693933  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:17.680240  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:19.681457  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:17.894353  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:19.894879  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:18.883092  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:18.897931  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:18.898015  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:18.932054  459741 cri.go:89] found id: ""
	I0717 19:36:18.932085  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.932096  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:18.932104  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:18.932162  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:18.966450  459741 cri.go:89] found id: ""
	I0717 19:36:18.966478  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.966490  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:18.966498  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:18.966561  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:18.999881  459741 cri.go:89] found id: ""
	I0717 19:36:18.999909  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.999920  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:18.999927  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:18.999984  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:19.036701  459741 cri.go:89] found id: ""
	I0717 19:36:19.036730  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.036746  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:19.036753  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:19.036824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:19.073488  459741 cri.go:89] found id: ""
	I0717 19:36:19.073515  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.073523  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:19.073528  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:19.073582  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:19.109128  459741 cri.go:89] found id: ""
	I0717 19:36:19.109161  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.109171  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:19.109179  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:19.109249  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:19.148452  459741 cri.go:89] found id: ""
	I0717 19:36:19.148494  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.148509  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:19.148518  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:19.148595  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:19.184056  459741 cri.go:89] found id: ""
	I0717 19:36:19.184086  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.184097  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:19.184112  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:19.184129  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:19.198518  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:19.198553  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:19.273176  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:19.273198  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:19.273213  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:19.347999  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:19.348042  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:19.390847  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:19.390890  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:19.194436  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:21.693020  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:22.176414  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:24.676290  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:22.395588  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:24.894771  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:21.946700  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:21.960590  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:21.960655  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:21.994632  459741 cri.go:89] found id: ""
	I0717 19:36:21.994662  459741 logs.go:276] 0 containers: []
	W0717 19:36:21.994670  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:21.994677  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:21.994738  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:22.029390  459741 cri.go:89] found id: ""
	I0717 19:36:22.029419  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.029428  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:22.029434  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:22.029484  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:22.065632  459741 cri.go:89] found id: ""
	I0717 19:36:22.065668  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.065679  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:22.065687  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:22.065792  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:22.100893  459741 cri.go:89] found id: ""
	I0717 19:36:22.100931  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.100942  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:22.100950  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:22.101007  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:22.137064  459741 cri.go:89] found id: ""
	I0717 19:36:22.137099  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.137110  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:22.137118  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:22.137187  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:22.176027  459741 cri.go:89] found id: ""
	I0717 19:36:22.176061  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.176071  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:22.176080  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:22.176147  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:22.211035  459741 cri.go:89] found id: ""
	I0717 19:36:22.211060  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.211068  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:22.211076  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:22.211129  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:22.246541  459741 cri.go:89] found id: ""
	I0717 19:36:22.246577  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.246589  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:22.246617  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:22.246635  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:22.288154  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:22.288198  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:22.342243  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:22.342295  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:22.356125  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:22.356157  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:22.427767  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:22.427793  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:22.427806  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:25.011986  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:25.026057  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:25.026134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:25.060744  459741 cri.go:89] found id: ""
	I0717 19:36:25.060778  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.060788  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:25.060794  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:25.060857  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:25.094760  459741 cri.go:89] found id: ""
	I0717 19:36:25.094799  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.094810  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:25.094818  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:25.094884  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:25.129937  459741 cri.go:89] found id: ""
	I0717 19:36:25.129980  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.129990  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:25.129996  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:25.130053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:25.162886  459741 cri.go:89] found id: ""
	I0717 19:36:25.162914  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.162922  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:25.162927  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:25.162994  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:25.199261  459741 cri.go:89] found id: ""
	I0717 19:36:25.199290  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.199312  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:25.199329  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:25.199388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:25.236454  459741 cri.go:89] found id: ""
	I0717 19:36:25.236494  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.236506  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:25.236514  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:25.236569  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:25.272257  459741 cri.go:89] found id: ""
	I0717 19:36:25.272293  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.272304  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:25.272312  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:25.272381  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:25.308442  459741 cri.go:89] found id: ""
	I0717 19:36:25.308478  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.308504  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:25.308517  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:25.308534  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:25.362269  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:25.362321  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:25.376994  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:25.377026  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:25.450219  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:25.450242  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:25.450256  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:25.537123  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:25.537161  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:23.693457  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.192763  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.677228  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:29.175390  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.176353  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.895481  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:29.393635  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.395374  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:28.077415  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:28.093047  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:28.093126  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:28.128129  459741 cri.go:89] found id: ""
	I0717 19:36:28.128158  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.128166  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:28.128180  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:28.128234  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:28.170796  459741 cri.go:89] found id: ""
	I0717 19:36:28.170834  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.170845  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:28.170853  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:28.170924  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:28.208250  459741 cri.go:89] found id: ""
	I0717 19:36:28.208278  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.208287  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:28.208304  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:28.208385  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:28.251511  459741 cri.go:89] found id: ""
	I0717 19:36:28.251547  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.251567  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:28.251575  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:28.251648  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:28.286597  459741 cri.go:89] found id: ""
	I0717 19:36:28.286633  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.286643  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:28.286651  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:28.286715  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:28.323089  459741 cri.go:89] found id: ""
	I0717 19:36:28.323119  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.323127  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:28.323133  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:28.323192  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:28.357941  459741 cri.go:89] found id: ""
	I0717 19:36:28.357972  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.357980  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:28.357987  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:28.358053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:28.393141  459741 cri.go:89] found id: ""
	I0717 19:36:28.393171  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.393182  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:28.393192  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:28.393208  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:28.446992  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:28.447031  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:28.460386  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:28.460416  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:28.524640  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:28.524671  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:28.524694  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:28.605322  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:28.605363  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:31.145909  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:31.159567  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:31.159686  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:31.196086  459741 cri.go:89] found id: ""
	I0717 19:36:31.196113  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.196125  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:31.196134  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:31.196186  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:31.238076  459741 cri.go:89] found id: ""
	I0717 19:36:31.238104  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.238111  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:31.238117  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:31.238172  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:31.274360  459741 cri.go:89] found id: ""
	I0717 19:36:31.274391  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.274400  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:31.274406  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:31.274462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:31.308845  459741 cri.go:89] found id: ""
	I0717 19:36:31.308871  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.308880  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:31.308886  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:31.308946  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:31.344978  459741 cri.go:89] found id: ""
	I0717 19:36:31.345010  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.345021  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:31.345028  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:31.345094  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:31.381741  459741 cri.go:89] found id: ""
	I0717 19:36:31.381767  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.381775  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:31.381783  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:31.381837  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:31.417522  459741 cri.go:89] found id: ""
	I0717 19:36:31.417554  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.417563  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:31.417571  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:31.417635  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:31.451121  459741 cri.go:89] found id: ""
	I0717 19:36:31.451152  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.451165  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:31.451177  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:31.451195  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:28.195048  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:30.693260  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:33.676171  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:35.676215  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:33.894329  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:36.394573  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.542015  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:31.542063  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:31.583418  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:31.583449  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:31.635807  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:31.635845  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:31.649144  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:31.649172  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:31.728539  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:34.229124  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:34.242482  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:34.242554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:34.276554  459741 cri.go:89] found id: ""
	I0717 19:36:34.276602  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.276610  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:34.276616  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:34.276671  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:34.314766  459741 cri.go:89] found id: ""
	I0717 19:36:34.314799  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.314807  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:34.314813  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:34.314874  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:34.352765  459741 cri.go:89] found id: ""
	I0717 19:36:34.352798  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.352809  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:34.352817  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:34.352886  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:34.386519  459741 cri.go:89] found id: ""
	I0717 19:36:34.386556  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.386564  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:34.386570  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:34.386669  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:34.423789  459741 cri.go:89] found id: ""
	I0717 19:36:34.423820  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.423829  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:34.423838  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:34.423911  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:34.458849  459741 cri.go:89] found id: ""
	I0717 19:36:34.458883  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.458895  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:34.458903  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:34.458969  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:34.494653  459741 cri.go:89] found id: ""
	I0717 19:36:34.494686  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.494697  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:34.494705  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:34.494770  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:34.529386  459741 cri.go:89] found id: ""
	I0717 19:36:34.529423  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.529431  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:34.529441  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:34.529455  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:34.582161  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:34.582204  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:34.596699  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:34.596732  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:34.673468  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:34.673501  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:34.673519  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:34.751134  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:34.751180  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:33.193313  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:35.193610  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:38.178018  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.676860  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:38.395038  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.396311  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:37.290429  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:37.304307  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:37.304391  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:37.338790  459741 cri.go:89] found id: ""
	I0717 19:36:37.338818  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.338827  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:37.338833  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:37.338903  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:37.376923  459741 cri.go:89] found id: ""
	I0717 19:36:37.376953  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.376961  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:37.376966  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:37.377017  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:37.415988  459741 cri.go:89] found id: ""
	I0717 19:36:37.416016  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.416024  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:37.416029  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:37.416083  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:37.449398  459741 cri.go:89] found id: ""
	I0717 19:36:37.449435  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.449447  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:37.449459  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:37.449532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:37.489489  459741 cri.go:89] found id: ""
	I0717 19:36:37.489525  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.489535  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:37.489544  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:37.489609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:37.528055  459741 cri.go:89] found id: ""
	I0717 19:36:37.528092  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.528103  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:37.528112  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:37.528174  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:37.564295  459741 cri.go:89] found id: ""
	I0717 19:36:37.564332  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.564344  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:37.564352  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:37.564421  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:37.597909  459741 cri.go:89] found id: ""
	I0717 19:36:37.597949  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.597960  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:37.597976  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:37.598002  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:37.652104  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:37.652147  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:37.668341  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:37.668374  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:37.746663  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:37.746693  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:37.746706  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:37.822210  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:37.822250  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:40.370417  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:40.385795  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:40.385873  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:40.431821  459741 cri.go:89] found id: ""
	I0717 19:36:40.431861  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.431873  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:40.431881  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:40.431952  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:40.468302  459741 cri.go:89] found id: ""
	I0717 19:36:40.468334  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.468346  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:40.468354  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:40.468409  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:40.503678  459741 cri.go:89] found id: ""
	I0717 19:36:40.503709  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.503727  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:40.503733  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:40.503785  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:40.540732  459741 cri.go:89] found id: ""
	I0717 19:36:40.540763  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.540772  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:40.540778  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:40.540843  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:40.589546  459741 cri.go:89] found id: ""
	I0717 19:36:40.589574  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.589583  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:40.589590  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:40.589642  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:40.625314  459741 cri.go:89] found id: ""
	I0717 19:36:40.625350  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.625359  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:40.625368  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:40.625435  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:40.663946  459741 cri.go:89] found id: ""
	I0717 19:36:40.663974  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.663982  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:40.663990  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:40.664048  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:40.701681  459741 cri.go:89] found id: ""
	I0717 19:36:40.701712  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.701722  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:40.701732  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:40.701747  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:40.762876  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:40.762913  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:40.777993  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:40.778039  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:40.854973  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:40.854996  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:40.855015  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:40.935075  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:40.935114  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:37.693613  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.192783  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:42.193024  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:43.176326  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:45.675745  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:42.895180  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:45.396439  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:43.476048  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:43.490580  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:43.490652  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:43.525613  459741 cri.go:89] found id: ""
	I0717 19:36:43.525649  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.525658  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:43.525665  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:43.525722  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:43.564102  459741 cri.go:89] found id: ""
	I0717 19:36:43.564147  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.564158  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:43.564166  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:43.564230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:43.603290  459741 cri.go:89] found id: ""
	I0717 19:36:43.603316  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.603323  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:43.603329  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:43.603387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:43.638001  459741 cri.go:89] found id: ""
	I0717 19:36:43.638031  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.638038  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:43.638056  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:43.638134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:43.672992  459741 cri.go:89] found id: ""
	I0717 19:36:43.673026  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.673037  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:43.673045  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:43.673115  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:43.713130  459741 cri.go:89] found id: ""
	I0717 19:36:43.713165  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.713176  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:43.713188  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:43.713255  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:43.747637  459741 cri.go:89] found id: ""
	I0717 19:36:43.747685  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.747694  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:43.747702  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:43.747771  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:43.784425  459741 cri.go:89] found id: ""
	I0717 19:36:43.784460  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.784471  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:43.784492  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:43.784510  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:43.798454  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:43.798483  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:43.875753  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:43.875776  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:43.875793  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:43.957009  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:43.957052  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:44.001089  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:44.001122  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:44.193299  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:46.193520  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:47.679212  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:50.176924  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:47.894374  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:49.898348  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:46.554298  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:46.568658  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:46.568730  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:46.604721  459741 cri.go:89] found id: ""
	I0717 19:36:46.604750  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.604759  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:46.604765  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:46.604815  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:46.644164  459741 cri.go:89] found id: ""
	I0717 19:36:46.644196  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.644209  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:46.644217  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:46.644288  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:46.683657  459741 cri.go:89] found id: ""
	I0717 19:36:46.683695  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.683702  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:46.683708  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:46.683773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:46.720967  459741 cri.go:89] found id: ""
	I0717 19:36:46.720995  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.721003  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:46.721008  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:46.721059  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:46.755825  459741 cri.go:89] found id: ""
	I0717 19:36:46.755854  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.755866  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:46.755876  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:46.755946  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:46.797091  459741 cri.go:89] found id: ""
	I0717 19:36:46.797130  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.797138  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:46.797145  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:46.797201  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:46.838053  459741 cri.go:89] found id: ""
	I0717 19:36:46.838090  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.838100  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:46.838108  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:46.838176  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:46.881516  459741 cri.go:89] found id: ""
	I0717 19:36:46.881549  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.881558  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:46.881567  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:46.881582  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:46.952407  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:46.952434  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:46.952457  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:47.043739  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:47.043787  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:47.083335  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:47.083367  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:47.138212  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:47.138256  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:49.656394  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:49.670755  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:49.670830  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:49.709177  459741 cri.go:89] found id: ""
	I0717 19:36:49.709208  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.709217  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:49.709222  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:49.709286  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:49.745905  459741 cri.go:89] found id: ""
	I0717 19:36:49.745940  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.745952  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:49.745960  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:49.746038  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:49.779073  459741 cri.go:89] found id: ""
	I0717 19:36:49.779106  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.779117  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:49.779124  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:49.779190  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:49.815459  459741 cri.go:89] found id: ""
	I0717 19:36:49.815504  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.815516  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:49.815525  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:49.815635  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:49.854714  459741 cri.go:89] found id: ""
	I0717 19:36:49.854751  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.854760  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:49.854766  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:49.854821  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:49.897717  459741 cri.go:89] found id: ""
	I0717 19:36:49.897742  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.897752  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:49.897760  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:49.897824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:49.933388  459741 cri.go:89] found id: ""
	I0717 19:36:49.933419  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.933429  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:49.933437  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:49.933527  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:49.971955  459741 cri.go:89] found id: ""
	I0717 19:36:49.971988  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.971999  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:49.972011  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:49.972029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:50.025761  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:50.025801  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:50.039771  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:50.039801  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:50.111349  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:50.111374  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:50.111388  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:50.193972  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:50.194004  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:48.693842  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:51.192837  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.177150  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:54.675862  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.394841  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:54.395035  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:56.395227  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.733468  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:52.749052  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:52.749119  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:52.785364  459741 cri.go:89] found id: ""
	I0717 19:36:52.785392  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.785400  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:52.785407  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:52.785462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:52.824177  459741 cri.go:89] found id: ""
	I0717 19:36:52.824211  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.824219  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:52.824225  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:52.824298  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:52.860781  459741 cri.go:89] found id: ""
	I0717 19:36:52.860812  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.860823  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:52.860831  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:52.860904  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:52.903963  459741 cri.go:89] found id: ""
	I0717 19:36:52.903995  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.904006  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:52.904014  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:52.904080  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:52.944920  459741 cri.go:89] found id: ""
	I0717 19:36:52.944950  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.944961  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:52.944968  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:52.945033  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:53.007409  459741 cri.go:89] found id: ""
	I0717 19:36:53.007438  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.007449  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:53.007456  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:53.007526  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:53.048160  459741 cri.go:89] found id: ""
	I0717 19:36:53.048193  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.048205  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:53.048213  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:53.048285  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:53.083493  459741 cri.go:89] found id: ""
	I0717 19:36:53.083522  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.083534  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:53.083546  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:53.083563  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:53.139380  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:53.139425  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:53.154005  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:53.154107  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:53.230123  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:53.230146  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:53.230160  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:53.307183  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:53.307228  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:55.849344  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:55.863554  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:55.863625  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:55.899317  459741 cri.go:89] found id: ""
	I0717 19:36:55.899347  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.899358  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:55.899365  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:55.899433  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:55.934725  459741 cri.go:89] found id: ""
	I0717 19:36:55.934760  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.934771  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:55.934779  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:55.934854  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:55.967721  459741 cri.go:89] found id: ""
	I0717 19:36:55.967751  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.967760  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:55.967768  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:55.967835  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:56.001163  459741 cri.go:89] found id: ""
	I0717 19:36:56.001193  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.001203  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:56.001211  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:56.001309  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:56.040863  459741 cri.go:89] found id: ""
	I0717 19:36:56.040898  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.040910  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:56.040918  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:56.040990  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:56.075045  459741 cri.go:89] found id: ""
	I0717 19:36:56.075075  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.075083  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:56.075090  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:56.075141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:56.115641  459741 cri.go:89] found id: ""
	I0717 19:36:56.115673  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.115683  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:56.115692  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:56.115757  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:56.154952  459741 cri.go:89] found id: ""
	I0717 19:36:56.154989  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.155000  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:56.155012  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:56.155029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:56.168624  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:56.168655  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:56.241129  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:56.241149  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:56.241161  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:56.326577  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:56.326627  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:56.370835  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:56.370896  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:53.194230  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:55.693021  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:56.677604  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:59.177845  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:58.395814  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:00.894894  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:58.923483  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:58.936869  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:58.936971  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:58.970975  459741 cri.go:89] found id: ""
	I0717 19:36:58.971015  459741 logs.go:276] 0 containers: []
	W0717 19:36:58.971026  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:58.971036  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:58.971103  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:59.004902  459741 cri.go:89] found id: ""
	I0717 19:36:59.004936  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.004945  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:59.004953  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:59.005021  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:59.049595  459741 cri.go:89] found id: ""
	I0717 19:36:59.049627  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.049635  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:59.049642  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:59.049694  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:59.084143  459741 cri.go:89] found id: ""
	I0717 19:36:59.084175  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.084185  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:59.084192  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:59.084244  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:59.121362  459741 cri.go:89] found id: ""
	I0717 19:36:59.121397  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.121408  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:59.121416  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:59.121486  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:59.158791  459741 cri.go:89] found id: ""
	I0717 19:36:59.158823  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.158832  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:59.158839  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:59.158907  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:59.196785  459741 cri.go:89] found id: ""
	I0717 19:36:59.196814  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.196825  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:59.196832  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:59.196928  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:59.233526  459741 cri.go:89] found id: ""
	I0717 19:36:59.233585  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.233602  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:59.233615  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:59.233633  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:59.287586  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:59.287629  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:59.303060  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:59.303109  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:59.380105  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:59.380141  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:59.380160  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:59.457673  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:59.457723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:57.693064  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:59.696137  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:02.194529  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:01.676676  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:04.174546  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.176591  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:02.895007  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:04.896128  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:01.999397  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:02.013638  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:02.013769  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:02.053831  459741 cri.go:89] found id: ""
	I0717 19:37:02.053860  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.053869  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:02.053875  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:02.053929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:02.095600  459741 cri.go:89] found id: ""
	I0717 19:37:02.095634  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.095644  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:02.095650  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:02.095703  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:02.134219  459741 cri.go:89] found id: ""
	I0717 19:37:02.134253  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.134267  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:02.134277  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:02.134351  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:02.172985  459741 cri.go:89] found id: ""
	I0717 19:37:02.173017  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.173029  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:02.173037  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:02.173109  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:02.210465  459741 cri.go:89] found id: ""
	I0717 19:37:02.210492  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.210500  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:02.210506  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:02.210562  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:02.246736  459741 cri.go:89] found id: ""
	I0717 19:37:02.246767  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.246775  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:02.246781  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:02.246834  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:02.285131  459741 cri.go:89] found id: ""
	I0717 19:37:02.285166  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.285177  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:02.285185  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:02.285254  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:02.323199  459741 cri.go:89] found id: ""
	I0717 19:37:02.323232  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.323241  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:02.323252  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:02.323266  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:02.337356  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:02.337392  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:02.411669  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:02.411706  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:02.411724  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:02.488543  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:02.488590  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:02.531147  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:02.531189  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:05.085888  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:05.099059  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:05.099134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:05.140745  459741 cri.go:89] found id: ""
	I0717 19:37:05.140771  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.140783  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:05.140791  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:05.140859  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:05.175634  459741 cri.go:89] found id: ""
	I0717 19:37:05.175669  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.175679  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:05.175687  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:05.175761  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:05.213114  459741 cri.go:89] found id: ""
	I0717 19:37:05.213148  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.213157  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:05.213171  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:05.213242  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:05.249756  459741 cri.go:89] found id: ""
	I0717 19:37:05.249791  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.249803  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:05.249811  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:05.249882  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:05.285601  459741 cri.go:89] found id: ""
	I0717 19:37:05.285634  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.285645  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:05.285654  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:05.285729  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:05.325523  459741 cri.go:89] found id: ""
	I0717 19:37:05.325557  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.325566  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:05.325573  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:05.325641  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:05.364250  459741 cri.go:89] found id: ""
	I0717 19:37:05.364284  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.364295  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:05.364303  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:05.364377  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:05.399924  459741 cri.go:89] found id: ""
	I0717 19:37:05.399951  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.399958  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:05.399967  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:05.399979  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:05.456770  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:05.456821  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:05.472041  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:05.472073  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:05.539653  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:05.539685  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:05.539703  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:05.628977  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:05.629023  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:04.693176  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.693594  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:08.677525  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.175472  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.897414  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:09.394322  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.395513  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:08.181585  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:08.195153  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:08.195225  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:08.234624  459741 cri.go:89] found id: ""
	I0717 19:37:08.234662  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.234674  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:08.234682  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:08.234739  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:08.273034  459741 cri.go:89] found id: ""
	I0717 19:37:08.273069  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.273081  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:08.273089  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:08.273157  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:08.310695  459741 cri.go:89] found id: ""
	I0717 19:37:08.310728  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.310740  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:08.310749  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:08.310815  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:08.346891  459741 cri.go:89] found id: ""
	I0717 19:37:08.346925  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.346936  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:08.346944  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:08.347015  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:08.384830  459741 cri.go:89] found id: ""
	I0717 19:37:08.384863  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.384872  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:08.384878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:08.384948  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:08.423939  459741 cri.go:89] found id: ""
	I0717 19:37:08.423973  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.423983  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:08.423991  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:08.424046  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:08.460822  459741 cri.go:89] found id: ""
	I0717 19:37:08.460854  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.460863  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:08.460874  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:08.460929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:08.497122  459741 cri.go:89] found id: ""
	I0717 19:37:08.497152  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.497164  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:08.497182  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:08.497197  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:08.549130  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:08.549179  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:08.566072  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:08.566109  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:08.637602  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:08.637629  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:08.637647  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:08.729025  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:08.729078  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:11.270696  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:11.285472  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:11.285554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:11.319587  459741 cri.go:89] found id: ""
	I0717 19:37:11.319629  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.319638  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:11.319646  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:11.319712  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:11.353044  459741 cri.go:89] found id: ""
	I0717 19:37:11.353077  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.353087  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:11.353093  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:11.353189  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:11.389515  459741 cri.go:89] found id: ""
	I0717 19:37:11.389545  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.389557  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:11.389565  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:11.389634  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:11.430599  459741 cri.go:89] found id: ""
	I0717 19:37:11.430632  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.430640  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:11.430646  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:11.430714  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:11.472171  459741 cri.go:89] found id: ""
	I0717 19:37:11.472207  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.472217  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:11.472223  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:11.472295  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:09.193245  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.695407  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:13.176224  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:15.179677  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:13.895579  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:16.394706  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.510599  459741 cri.go:89] found id: ""
	I0717 19:37:11.510672  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.510689  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:11.510706  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:11.510779  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:11.550914  459741 cri.go:89] found id: ""
	I0717 19:37:11.550946  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.550954  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:11.550960  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:11.551017  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:11.591129  459741 cri.go:89] found id: ""
	I0717 19:37:11.591205  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.591219  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:11.591233  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:11.591252  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:11.646229  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:11.646265  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:11.661204  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:11.661243  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:11.742396  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:11.742426  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:11.742442  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:11.824647  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:11.824687  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:14.364360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:14.381022  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:14.381101  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:14.419922  459741 cri.go:89] found id: ""
	I0717 19:37:14.419960  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.419971  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:14.419977  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:14.420032  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:14.459256  459741 cri.go:89] found id: ""
	I0717 19:37:14.459288  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.459296  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:14.459317  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:14.459387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:14.494487  459741 cri.go:89] found id: ""
	I0717 19:37:14.494517  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.494528  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:14.494535  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:14.494609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:14.528878  459741 cri.go:89] found id: ""
	I0717 19:37:14.528919  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.528928  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:14.528934  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:14.528999  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:14.564401  459741 cri.go:89] found id: ""
	I0717 19:37:14.564439  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.564451  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:14.564460  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:14.564548  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:14.604641  459741 cri.go:89] found id: ""
	I0717 19:37:14.604682  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.604694  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:14.604703  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:14.604770  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:14.638128  459741 cri.go:89] found id: ""
	I0717 19:37:14.638159  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.638168  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:14.638175  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:14.638245  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:14.679475  459741 cri.go:89] found id: ""
	I0717 19:37:14.679508  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.679518  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:14.679529  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:14.679545  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:14.733829  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:14.733871  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:14.748878  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:14.748910  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:14.821043  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:14.821073  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:14.821089  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:14.905137  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:14.905178  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:14.193577  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:16.193939  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:17.181158  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:19.675868  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:18.894678  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:20.895683  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:17.445221  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:17.459152  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:17.459221  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:17.498175  459741 cri.go:89] found id: ""
	I0717 19:37:17.498204  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.498216  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:17.498226  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:17.498287  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:17.534460  459741 cri.go:89] found id: ""
	I0717 19:37:17.534498  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.534506  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:17.534512  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:17.534571  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:17.571998  459741 cri.go:89] found id: ""
	I0717 19:37:17.572030  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.572040  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:17.572047  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:17.572110  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:17.611184  459741 cri.go:89] found id: ""
	I0717 19:37:17.611215  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.611224  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:17.611231  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:17.611282  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:17.656227  459741 cri.go:89] found id: ""
	I0717 19:37:17.656275  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.656287  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:17.656295  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:17.656361  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:17.695693  459741 cri.go:89] found id: ""
	I0717 19:37:17.695727  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.695746  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:17.695763  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:17.695835  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:17.734017  459741 cri.go:89] found id: ""
	I0717 19:37:17.734043  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.734052  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:17.734057  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:17.734123  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:17.771539  459741 cri.go:89] found id: ""
	I0717 19:37:17.771575  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.771586  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:17.771597  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:17.771611  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:17.811742  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:17.811783  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:17.861865  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:17.861909  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:17.876221  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:17.876255  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:17.957239  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:17.957262  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:17.957278  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:20.539123  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:20.554464  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:20.554546  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:20.591656  459741 cri.go:89] found id: ""
	I0717 19:37:20.591697  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.591706  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:20.591716  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:20.591775  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:20.629470  459741 cri.go:89] found id: ""
	I0717 19:37:20.629504  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.629513  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:20.629519  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:20.629587  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:20.670022  459741 cri.go:89] found id: ""
	I0717 19:37:20.670090  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.670108  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:20.670120  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:20.670199  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:20.711820  459741 cri.go:89] found id: ""
	I0717 19:37:20.711858  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.711869  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:20.711878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:20.711952  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:20.746305  459741 cri.go:89] found id: ""
	I0717 19:37:20.746339  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.746349  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:20.746356  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:20.746423  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:20.782218  459741 cri.go:89] found id: ""
	I0717 19:37:20.782255  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.782266  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:20.782275  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:20.782351  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:20.818704  459741 cri.go:89] found id: ""
	I0717 19:37:20.818740  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.818749  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:20.818757  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:20.818820  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:20.853662  459741 cri.go:89] found id: ""
	I0717 19:37:20.853693  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.853701  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:20.853710  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:20.853723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:20.896351  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:20.896377  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:20.948402  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:20.948450  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:20.962807  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:20.962840  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:21.057005  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:21.057036  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:21.057055  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:18.693664  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:21.192940  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:21.676124  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:24.175970  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:23.395791  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:25.894186  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:23.634596  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:23.648460  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:23.648555  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:23.687289  459741 cri.go:89] found id: ""
	I0717 19:37:23.687320  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.687331  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:23.687341  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:23.687407  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:23.725794  459741 cri.go:89] found id: ""
	I0717 19:37:23.725826  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.725847  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:23.725855  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:23.725916  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:23.761575  459741 cri.go:89] found id: ""
	I0717 19:37:23.761624  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.761635  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:23.761643  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:23.761709  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:23.800061  459741 cri.go:89] found id: ""
	I0717 19:37:23.800098  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.800111  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:23.800120  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:23.800190  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:23.836067  459741 cri.go:89] found id: ""
	I0717 19:37:23.836098  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.836107  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:23.836113  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:23.836170  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:23.875151  459741 cri.go:89] found id: ""
	I0717 19:37:23.875179  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.875192  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:23.875200  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:23.875268  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:23.913641  459741 cri.go:89] found id: ""
	I0717 19:37:23.913675  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.913685  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:23.913693  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:23.913759  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:23.950362  459741 cri.go:89] found id: ""
	I0717 19:37:23.950391  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.950400  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:23.950410  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:23.950426  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:24.000879  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:24.000924  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:24.014874  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:24.014912  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:24.086589  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:24.086624  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:24.086639  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:24.163160  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:24.163208  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:23.194522  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:25.694306  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:26.675299  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:28.675607  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:31.176216  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:27.895077  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:29.895208  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:26.705781  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:26.720471  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:26.720562  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:26.776895  459741 cri.go:89] found id: ""
	I0717 19:37:26.776927  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.776936  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:26.776945  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:26.777038  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:26.812191  459741 cri.go:89] found id: ""
	I0717 19:37:26.812219  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.812228  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:26.812234  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:26.812288  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:26.851142  459741 cri.go:89] found id: ""
	I0717 19:37:26.851174  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.851183  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:26.851189  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:26.851243  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:26.887218  459741 cri.go:89] found id: ""
	I0717 19:37:26.887254  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.887266  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:26.887274  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:26.887364  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:26.924197  459741 cri.go:89] found id: ""
	I0717 19:37:26.924226  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.924234  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:26.924240  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:26.924293  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:26.964475  459741 cri.go:89] found id: ""
	I0717 19:37:26.964528  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.964538  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:26.964545  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:26.964618  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:27.001951  459741 cri.go:89] found id: ""
	I0717 19:37:27.002001  459741 logs.go:276] 0 containers: []
	W0717 19:37:27.002010  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:27.002017  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:27.002068  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:27.037062  459741 cri.go:89] found id: ""
	I0717 19:37:27.037094  459741 logs.go:276] 0 containers: []
	W0717 19:37:27.037108  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:27.037122  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:27.037140  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:27.090343  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:27.090389  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:27.104534  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:27.104579  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:27.179957  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:27.179982  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:27.179995  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:27.260358  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:27.260399  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:29.806487  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:29.821519  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:29.821584  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:29.856293  459741 cri.go:89] found id: ""
	I0717 19:37:29.856328  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.856338  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:29.856347  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:29.856413  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:29.893174  459741 cri.go:89] found id: ""
	I0717 19:37:29.893210  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.893220  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:29.893229  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:29.893294  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:29.928264  459741 cri.go:89] found id: ""
	I0717 19:37:29.928298  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.928309  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:29.928316  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:29.928386  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:29.963399  459741 cri.go:89] found id: ""
	I0717 19:37:29.963441  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.963453  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:29.963461  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:29.963532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:30.001835  459741 cri.go:89] found id: ""
	I0717 19:37:30.001868  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.001878  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:30.001886  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:30.001953  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:30.039476  459741 cri.go:89] found id: ""
	I0717 19:37:30.039507  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.039516  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:30.039526  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:30.039601  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:30.076051  459741 cri.go:89] found id: ""
	I0717 19:37:30.076089  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.076101  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:30.076121  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:30.076198  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:30.110959  459741 cri.go:89] found id: ""
	I0717 19:37:30.110988  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.111000  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:30.111013  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:30.111029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:30.195062  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:30.195101  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:30.235830  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:30.235872  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:30.291057  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:30.291098  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:30.306510  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:30.306543  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:30.382689  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:28.193720  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:30.693187  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.193323  459147 pod_ready.go:81] duration metric: took 4m0.007067784s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	E0717 19:37:32.193346  459147 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 19:37:32.193354  459147 pod_ready.go:38] duration metric: took 4m5.556690666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:37:32.193373  459147 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:37:32.193409  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:32.193469  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:32.245735  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:32.245775  459147 cri.go:89] found id: ""
	I0717 19:37:32.245785  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:32.245865  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.250669  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:32.250736  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:32.291837  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:32.291863  459147 cri.go:89] found id: ""
	I0717 19:37:32.291873  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:32.291944  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.296739  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:32.296806  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:32.335823  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:32.335854  459147 cri.go:89] found id: ""
	I0717 19:37:32.335873  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:32.335944  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.341789  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:32.341875  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:32.382106  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:32.382128  459147 cri.go:89] found id: ""
	I0717 19:37:32.382136  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:32.382183  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.386399  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:32.386453  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:32.426319  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:32.426348  459147 cri.go:89] found id: ""
	I0717 19:37:32.426358  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:32.426415  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.431280  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:32.431363  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:33.176404  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:35.177851  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.397457  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:34.894702  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.883437  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:32.898085  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:32.898159  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:32.933782  459741 cri.go:89] found id: ""
	I0717 19:37:32.933813  459741 logs.go:276] 0 containers: []
	W0717 19:37:32.933823  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:32.933842  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:32.933909  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:32.973843  459741 cri.go:89] found id: ""
	I0717 19:37:32.973871  459741 logs.go:276] 0 containers: []
	W0717 19:37:32.973879  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:32.973885  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:32.973936  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:33.010691  459741 cri.go:89] found id: ""
	I0717 19:37:33.010718  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.010727  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:33.010732  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:33.010791  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:33.051223  459741 cri.go:89] found id: ""
	I0717 19:37:33.051258  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.051269  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:33.051276  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:33.051345  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:33.091182  459741 cri.go:89] found id: ""
	I0717 19:37:33.091212  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.091220  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:33.091225  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:33.091279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:33.128755  459741 cri.go:89] found id: ""
	I0717 19:37:33.128791  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.128804  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:33.128820  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:33.128887  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:33.171834  459741 cri.go:89] found id: ""
	I0717 19:37:33.171871  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.171883  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:33.171890  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:33.171956  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:33.230954  459741 cri.go:89] found id: ""
	I0717 19:37:33.230982  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.230990  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:33.231001  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:33.231013  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:33.325437  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:33.325483  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:33.325500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:33.418548  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:33.418590  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:33.467574  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:33.467614  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:33.521312  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:33.521346  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.037360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:36.051209  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:36.051279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:36.088849  459741 cri.go:89] found id: ""
	I0717 19:37:36.088897  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.088909  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:36.088916  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:36.088973  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:36.124070  459741 cri.go:89] found id: ""
	I0717 19:37:36.124106  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.124118  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:36.124125  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:36.124199  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:36.159373  459741 cri.go:89] found id: ""
	I0717 19:37:36.159402  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.159410  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:36.159415  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:36.159467  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:36.197269  459741 cri.go:89] found id: ""
	I0717 19:37:36.197294  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.197302  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:36.197337  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:36.197389  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:36.231024  459741 cri.go:89] found id: ""
	I0717 19:37:36.231060  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.231072  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:36.231080  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:36.231152  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:36.265388  459741 cri.go:89] found id: ""
	I0717 19:37:36.265414  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.265422  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:36.265429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:36.265477  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:36.301738  459741 cri.go:89] found id: ""
	I0717 19:37:36.301774  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.301786  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:36.301794  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:36.301892  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:36.340042  459741 cri.go:89] found id: ""
	I0717 19:37:36.340072  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.340080  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:36.340091  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:36.340113  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:36.389928  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:36.389962  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:36.442668  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:36.442698  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.458862  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:36.458908  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:32.470477  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:32.470505  459147 cri.go:89] found id: ""
	I0717 19:37:32.470514  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:32.470579  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.474790  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:32.474845  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:32.511020  459147 cri.go:89] found id: ""
	I0717 19:37:32.511060  459147 logs.go:276] 0 containers: []
	W0717 19:37:32.511075  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:32.511083  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:32.511148  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:32.550662  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:32.550694  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:32.550700  459147 cri.go:89] found id: ""
	I0717 19:37:32.550710  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:32.550779  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.555544  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.559818  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:32.559845  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:32.599011  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:32.599044  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:32.639034  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:32.639072  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:32.680456  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:32.680497  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:32.735881  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:32.735919  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:33.295876  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:33.295927  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:33.453164  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:33.453204  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:33.469665  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:33.469696  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:33.518388  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:33.518425  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:33.580637  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:33.580683  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:33.618544  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:33.618584  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:33.656083  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:33.656127  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:33.703083  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:33.703133  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:36.261037  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:36.278701  459147 api_server.go:72] duration metric: took 4m12.907019507s to wait for apiserver process to appear ...
	I0717 19:37:36.278734  459147 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:37:36.278780  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:36.278843  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:36.320128  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:36.320158  459147 cri.go:89] found id: ""
	I0717 19:37:36.320169  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:36.320231  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.325077  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:36.325145  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:36.375930  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:36.375956  459147 cri.go:89] found id: ""
	I0717 19:37:36.375965  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:36.376022  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.381348  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:36.381428  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:36.425613  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:36.425642  459147 cri.go:89] found id: ""
	I0717 19:37:36.425653  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:36.425718  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.430743  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:36.430809  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:36.473039  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:36.473071  459147 cri.go:89] found id: ""
	I0717 19:37:36.473082  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:36.473144  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.477553  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:36.477632  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:36.519042  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:36.519066  459147 cri.go:89] found id: ""
	I0717 19:37:36.519088  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:36.519168  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.523986  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:36.524052  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:36.565547  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:36.565574  459147 cri.go:89] found id: ""
	I0717 19:37:36.565583  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:36.565636  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.570755  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:36.570832  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:36.608157  459147 cri.go:89] found id: ""
	I0717 19:37:36.608185  459147 logs.go:276] 0 containers: []
	W0717 19:37:36.608194  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:36.608201  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:36.608258  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:36.652807  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:36.652828  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:36.652832  459147 cri.go:89] found id: ""
	I0717 19:37:36.652839  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:36.652899  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.657815  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.663187  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:36.663219  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.681970  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:36.682006  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:36.797996  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:36.798041  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:36.862257  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:36.862300  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:36.900711  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:36.900752  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:37.384370  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:37.384415  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
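The lines above are one complete log-gathering pass: journalctl for kubelet and CRI-O, `kubectl describe nodes`, and `crictl logs --tail 400` for each container ID discovered via `crictl ps -a --quiet --name=...`. A rough Go sketch of that fan-out, shelling out to the same commands; the helper name and the command set here are illustrative, not minikube's logs.go implementation:

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs runs each named shell command and returns its combined output,
// mirroring the per-source collection pass visible in the log. Sketch only.
func gatherLogs(sources map[string]string) map[string]string {
	out := make(map[string]string)
	for name, cmd := range sources {
		b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			out[name] = fmt.Sprintf("error: %v\n%s", err, b)
			continue
		}
		out[name] = string(b)
	}
	return out
}

func main() {
	logs := gatherLogs(map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo crictl ps -a",
		// Per-container entries such as "sudo crictl logs --tail 400 <id>"
		// would be added here, using IDs from `crictl ps -a --quiet --name=...`.
	})
	for name := range logs {
		fmt.Println("gathered:", name)
	}
}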
	I0717 19:37:37.676589  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:40.177720  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:36.888133  459447 pod_ready.go:81] duration metric: took 4m0.000157346s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" ...
	E0717 19:37:36.888161  459447 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 19:37:36.888179  459447 pod_ready.go:38] duration metric: took 4m7.552581235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:37:36.888210  459447 kubeadm.go:597] duration metric: took 4m17.06862666s to restartPrimaryControlPlane
	W0717 19:37:36.888317  459447 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:37:36.888368  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	W0717 19:37:36.537169  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:36.537199  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:36.537216  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:39.120374  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:39.138989  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:39.139065  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:39.198086  459741 cri.go:89] found id: ""
	I0717 19:37:39.198113  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.198121  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:39.198128  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:39.198192  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:39.249660  459741 cri.go:89] found id: ""
	I0717 19:37:39.249707  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.249718  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:39.249725  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:39.249802  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:39.296042  459741 cri.go:89] found id: ""
	I0717 19:37:39.296079  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.296105  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:39.296115  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:39.296198  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:39.335401  459741 cri.go:89] found id: ""
	I0717 19:37:39.335441  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.335453  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:39.335461  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:39.335532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:39.379343  459741 cri.go:89] found id: ""
	I0717 19:37:39.379389  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.379401  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:39.379409  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:39.379478  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:39.417450  459741 cri.go:89] found id: ""
	I0717 19:37:39.417478  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.417486  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:39.417493  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:39.417556  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:39.453778  459741 cri.go:89] found id: ""
	I0717 19:37:39.453821  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.453835  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:39.453843  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:39.453937  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:39.490619  459741 cri.go:89] found id: ""
	I0717 19:37:39.490654  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.490666  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:39.490678  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:39.490695  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:39.552266  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:39.552304  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:39.567973  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:39.568018  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:39.659709  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:39.659740  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:39.659757  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:39.752017  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:39.752064  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:37.438269  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:37.438314  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:37.491298  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:37.491338  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:37.544646  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:37.544686  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:37.608191  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:37.608229  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:37.652477  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:37.652526  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:37.693416  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:37.693460  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:37.740997  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:37.741045  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.285764  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:37:40.292091  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 200:
	ok
	I0717 19:37:40.293337  459147 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:37:40.293368  459147 api_server.go:131] duration metric: took 4.014624748s to wait for apiserver health ...
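The healthz check above polls https://192.168.61.66:8443/healthz until it returns 200 before moving on to the kube-system pod checks. A minimal sketch of such a wait loop in Go, assuming a self-signed apiserver certificate (so verification is skipped) and placeholder timeout values; this is not minikube's api_server.go code:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate here, so the sketch
		// skips verification; a real client would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.66:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthz returned 200: ok")
}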
	I0717 19:37:40.293379  459147 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:37:40.293412  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:40.293485  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:40.334754  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:40.334783  459147 cri.go:89] found id: ""
	I0717 19:37:40.334794  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:40.334855  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.338862  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:40.338932  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:40.379320  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:40.379350  459147 cri.go:89] found id: ""
	I0717 19:37:40.379361  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:40.379424  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.384351  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:40.384426  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:40.423393  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:40.423421  459147 cri.go:89] found id: ""
	I0717 19:37:40.423432  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:40.423496  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.429541  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:40.429622  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:40.476723  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:40.476752  459147 cri.go:89] found id: ""
	I0717 19:37:40.476762  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:40.476822  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.483324  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:40.483407  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:40.530062  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:40.530090  459147 cri.go:89] found id: ""
	I0717 19:37:40.530100  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:40.530160  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.535894  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:40.535980  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:40.574966  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:40.575000  459147 cri.go:89] found id: ""
	I0717 19:37:40.575011  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:40.575082  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.579633  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:40.579709  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:40.617093  459147 cri.go:89] found id: ""
	I0717 19:37:40.617131  459147 logs.go:276] 0 containers: []
	W0717 19:37:40.617143  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:40.617151  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:40.617217  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:40.670143  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.670170  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:40.670177  459147 cri.go:89] found id: ""
	I0717 19:37:40.670188  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:40.670265  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.675795  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.681005  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:40.681027  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.729750  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:40.729797  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:41.109749  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:41.109806  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:41.128573  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:41.128616  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:41.246119  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:41.246163  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:41.298281  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:41.298342  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:41.376160  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:41.376205  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:41.444696  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:41.444732  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:41.488191  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:41.488225  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:41.554001  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:41.554055  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:41.596172  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:41.596208  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:41.636145  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:41.636184  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:41.687058  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:41.687092  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:44.246334  459147 system_pods.go:59] 8 kube-system pods found
	I0717 19:37:44.246367  459147 system_pods.go:61] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running
	I0717 19:37:44.246373  459147 system_pods.go:61] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running
	I0717 19:37:44.246379  459147 system_pods.go:61] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running
	I0717 19:37:44.246384  459147 system_pods.go:61] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running
	I0717 19:37:44.246390  459147 system_pods.go:61] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:37:44.246394  459147 system_pods.go:61] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running
	I0717 19:37:44.246401  459147 system_pods.go:61] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:37:44.246406  459147 system_pods.go:61] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:37:44.246416  459147 system_pods.go:74] duration metric: took 3.953030235s to wait for pod list to return data ...
	I0717 19:37:44.246425  459147 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:37:44.249315  459147 default_sa.go:45] found service account: "default"
	I0717 19:37:44.249336  459147 default_sa.go:55] duration metric: took 2.904936ms for default service account to be created ...
	I0717 19:37:44.249344  459147 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:37:44.254845  459147 system_pods.go:86] 8 kube-system pods found
	I0717 19:37:44.254873  459147 system_pods.go:89] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running
	I0717 19:37:44.254879  459147 system_pods.go:89] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running
	I0717 19:37:44.254883  459147 system_pods.go:89] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running
	I0717 19:37:44.254888  459147 system_pods.go:89] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running
	I0717 19:37:44.254892  459147 system_pods.go:89] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:37:44.254895  459147 system_pods.go:89] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running
	I0717 19:37:44.254902  459147 system_pods.go:89] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:37:44.254908  459147 system_pods.go:89] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:37:44.254916  459147 system_pods.go:126] duration metric: took 5.565796ms to wait for k8s-apps to be running ...
	I0717 19:37:44.254922  459147 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:37:44.254970  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:37:44.273765  459147 system_svc.go:56] duration metric: took 18.830474ms WaitForService to wait for kubelet
	I0717 19:37:44.273805  459147 kubeadm.go:582] duration metric: took 4m20.90212576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:37:44.273838  459147 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:37:44.278782  459147 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:37:44.278833  459147 node_conditions.go:123] node cpu capacity is 2
	I0717 19:37:44.278864  459147 node_conditions.go:105] duration metric: took 5.01941ms to run NodePressure ...
	I0717 19:37:44.278879  459147 start.go:241] waiting for startup goroutines ...
	I0717 19:37:44.278889  459147 start.go:246] waiting for cluster config update ...
	I0717 19:37:44.278906  459147 start.go:255] writing updated cluster config ...
	I0717 19:37:44.279303  459147 ssh_runner.go:195] Run: rm -f paused
	I0717 19:37:44.331361  459147 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 19:37:44.334137  459147 out.go:177] * Done! kubectl is now configured to use "no-preload-713715" cluster and "default" namespace by default
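The "minor skew: 1" note above compares the local kubectl version (1.30.3) with the cluster version (1.31.0-beta.0) by their minor components. A small, self-contained illustration of that comparison in Go; the helper is hypothetical and ignores pre-release suffixes, and it is not minikube's version-skew check:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch"-style version strings. Illustrative sketch only.
func minorSkew(a, b string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	ma, err := minor(a)
	if err != nil {
		return 0, err
	}
	mb, err := minor(b)
	if err != nil {
		return 0, err
	}
	if ma > mb {
		return ma - mb, nil
	}
	return mb - ma, nil
}

func main() {
	skew, err := minorSkew("1.30.3", "1.31.0-beta.0")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("minor skew: %d\n", skew) // prints: minor skew: 1
}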
	I0717 19:37:42.676991  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:45.176025  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:42.298864  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:42.312076  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:42.312160  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:42.346742  459741 cri.go:89] found id: ""
	I0717 19:37:42.346767  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.346782  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:42.346787  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:42.346839  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:42.386100  459741 cri.go:89] found id: ""
	I0717 19:37:42.386131  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.386139  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:42.386145  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:42.386196  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:42.420604  459741 cri.go:89] found id: ""
	I0717 19:37:42.420634  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.420646  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:42.420656  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:42.420725  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:42.457305  459741 cri.go:89] found id: ""
	I0717 19:37:42.457338  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.457349  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:42.457357  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:42.457422  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:42.491383  459741 cri.go:89] found id: ""
	I0717 19:37:42.491418  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.491427  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:42.491434  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:42.491489  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:42.527500  459741 cri.go:89] found id: ""
	I0717 19:37:42.527533  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.527547  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:42.527557  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:42.527642  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:42.560724  459741 cri.go:89] found id: ""
	I0717 19:37:42.560759  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.560769  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:42.560778  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:42.560854  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:42.595812  459741 cri.go:89] found id: ""
	I0717 19:37:42.595846  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.595858  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:42.595870  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:42.595886  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:42.610094  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:42.610129  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:42.683744  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:42.683763  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:42.683776  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:42.767187  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:42.767237  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:42.810319  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:42.810350  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:45.363245  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:45.378562  459741 kubeadm.go:597] duration metric: took 4m4.629259775s to restartPrimaryControlPlane
	W0717 19:37:45.378681  459741 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:37:45.378723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:37:47.675784  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:50.174617  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:50.298107  459741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.919332692s)
	I0717 19:37:50.298189  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:37:50.314299  459741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:37:50.325112  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:37:50.335943  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:37:50.335970  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:37:50.336018  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:37:50.345604  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:37:50.345669  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:37:50.355339  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:37:50.365401  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:37:50.365468  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:37:50.378870  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:37:50.388710  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:37:50.388779  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:37:50.398847  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:37:50.408579  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:37:50.408648  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
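The grep/rm pairs above are a stale-config sweep: each well-known kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so the following `kubeadm init` regenerates it. A rough Go sketch of that pattern; the helper and the endpoint value are illustrative, not minikube's kubeadm.go code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cleanStaleKubeconfigs removes any of the well-known kubeconfig files that do
// not mention the expected control-plane endpoint, mirroring the
// grep-then-rm sequence in the log. Sketch only.
func cleanStaleKubeconfigs(dir, endpoint string) {
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, name := range files {
		path := filepath.Join(dir, name)
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or stale: remove it so `kubeadm init` writes a fresh copy.
			os.Remove(path)
			fmt.Printf("removed stale config: %s\n", path)
			continue
		}
		fmt.Printf("keeping config: %s\n", path)
	}
}

func main() {
	cleanStaleKubeconfigs("/etc/kubernetes", "https://control-plane.minikube.internal:8443")
}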
	I0717 19:37:50.419223  459741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:37:50.655878  459741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:37:52.175610  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:54.675346  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:57.175606  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:59.175665  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:01.675667  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:04.174856  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:06.175048  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:08.558767  459447 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.670364582s)
	I0717 19:38:08.558869  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:08.574972  459447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:38:08.585748  459447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:38:08.595641  459447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:38:08.595677  459447 kubeadm.go:157] found existing configuration files:
	
	I0717 19:38:08.595741  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 19:38:08.605738  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:38:08.605792  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:38:08.615415  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 19:38:08.625406  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:38:08.625465  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:38:08.635462  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 19:38:08.644862  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:38:08.644938  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:38:08.654840  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 19:38:08.664308  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:38:08.664371  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:38:08.675152  459447 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:38:08.726060  459447 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 19:38:08.726181  459447 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:38:08.868399  459447 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:38:08.868535  459447 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:38:08.868680  459447 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:38:09.092126  459447 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:38:09.094144  459447 out.go:204]   - Generating certificates and keys ...
	I0717 19:38:09.094257  459447 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:38:09.094344  459447 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:38:09.094447  459447 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:38:09.094529  459447 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:38:09.094728  459447 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:38:09.094841  459447 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:38:09.094958  459447 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:38:09.095051  459447 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:38:09.095145  459447 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:38:09.095234  459447 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:38:09.095302  459447 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:38:09.095407  459447 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:38:09.220760  459447 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:38:09.395779  459447 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:38:09.485283  459447 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:38:09.582142  459447 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:38:09.644739  459447 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:38:09.645546  459447 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:38:09.648168  459447 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:38:08.175516  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:10.676234  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:09.651091  459447 out.go:204]   - Booting up control plane ...
	I0717 19:38:09.651237  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:38:09.651380  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:38:09.651472  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:38:09.672137  459447 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:38:09.675016  459447 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:38:09.675265  459447 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:38:09.835705  459447 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:38:09.835804  459447 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:38:10.837657  459447 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002210874s
	I0717 19:38:10.837780  459447 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 19:38:15.841849  459447 kubeadm.go:310] [api-check] The API server is healthy after 5.002346886s
	I0717 19:38:15.853189  459447 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:38:15.871261  459447 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:38:15.901421  459447 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:38:15.901663  459447 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-378944 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:38:15.914138  459447 kubeadm.go:310] [bootstrap-token] Using token: f20mgr.mp8yeahngp4xg46o
	I0717 19:38:12.678188  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:15.176507  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:15.916156  459447 out.go:204]   - Configuring RBAC rules ...
	I0717 19:38:15.916304  459447 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:38:15.926114  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:38:15.936748  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:38:15.940344  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:38:15.943530  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:38:15.947036  459447 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:38:16.249457  459447 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:38:16.706293  459447 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 19:38:17.247816  459447 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 19:38:17.249321  459447 kubeadm.go:310] 
	I0717 19:38:17.249431  459447 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 19:38:17.249453  459447 kubeadm.go:310] 
	I0717 19:38:17.249552  459447 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 19:38:17.249563  459447 kubeadm.go:310] 
	I0717 19:38:17.249594  459447 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 19:38:17.249677  459447 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:38:17.249768  459447 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:38:17.249791  459447 kubeadm.go:310] 
	I0717 19:38:17.249868  459447 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 19:38:17.249878  459447 kubeadm.go:310] 
	I0717 19:38:17.249949  459447 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:38:17.249968  459447 kubeadm.go:310] 
	I0717 19:38:17.250016  459447 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 19:38:17.250083  459447 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:38:17.250143  459447 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:38:17.250149  459447 kubeadm.go:310] 
	I0717 19:38:17.250269  459447 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:38:17.250371  459447 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 19:38:17.250381  459447 kubeadm.go:310] 
	I0717 19:38:17.250484  459447 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token f20mgr.mp8yeahngp4xg46o \
	I0717 19:38:17.250605  459447 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 19:38:17.250663  459447 kubeadm.go:310] 	--control-plane 
	I0717 19:38:17.250677  459447 kubeadm.go:310] 
	I0717 19:38:17.250771  459447 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:38:17.250784  459447 kubeadm.go:310] 
	I0717 19:38:17.250870  459447 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token f20mgr.mp8yeahngp4xg46o \
	I0717 19:38:17.251029  459447 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 19:38:17.252262  459447 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:38:17.252302  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:38:17.252318  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:38:17.254910  459447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:38:17.669679  459061 pod_ready.go:81] duration metric: took 4m0.000889569s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" ...
	E0717 19:38:17.669706  459061 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 19:38:17.669726  459061 pod_ready.go:38] duration metric: took 4m8.910120635s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:17.669768  459061 kubeadm.go:597] duration metric: took 4m18.632716414s to restartPrimaryControlPlane
	W0717 19:38:17.669838  459061 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:38:17.669870  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:38:17.256192  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:38:17.268586  459447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:38:17.292455  459447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:38:17.292536  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:17.292623  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-378944 minikube.k8s.io/updated_at=2024_07_17T19_38_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=default-k8s-diff-port-378944 minikube.k8s.io/primary=true
	I0717 19:38:17.325184  459447 ops.go:34] apiserver oom_adj: -16
	I0717 19:38:17.469427  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:17.969845  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:18.470139  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:18.969524  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:19.469856  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:19.970486  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:20.470263  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:20.970157  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:21.470331  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:21.969885  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:22.469572  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:22.969898  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:23.470149  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:23.970327  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:24.470275  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:24.970386  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:25.469631  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:25.969749  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:26.469512  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:26.970082  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:27.469534  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:27.970318  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:28.470232  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:28.970033  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:29.469586  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:29.969588  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:30.469599  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:30.970505  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:31.119385  459447 kubeadm.go:1113] duration metric: took 13.826924083s to wait for elevateKubeSystemPrivileges
	I0717 19:38:31.119428  459447 kubeadm.go:394] duration metric: took 5m11.355625204s to StartCluster
	I0717 19:38:31.119449  459447 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:38:31.119548  459447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:38:31.121296  459447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:38:31.121610  459447 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:38:31.121724  459447 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:38:31.121802  459447 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-378944"
	I0717 19:38:31.121827  459447 config.go:182] Loaded profile config "default-k8s-diff-port-378944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:38:31.121846  459447 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-378944"
	I0717 19:38:31.121849  459447 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-378944"
	I0717 19:38:31.121873  459447 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-378944"
	W0717 19:38:31.121883  459447 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:38:31.121899  459447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-378944"
	I0717 19:38:31.121906  459447 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-378944"
	W0717 19:38:31.121915  459447 addons.go:243] addon metrics-server should already be in state true
	I0717 19:38:31.121927  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.121969  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.122322  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122339  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122366  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.122379  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122388  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.122411  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.123339  459447 out.go:177] * Verifying Kubernetes components...
	I0717 19:38:31.129194  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:38:31.139023  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0717 19:38:31.139292  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0717 19:38:31.139632  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.139775  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.140272  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.140292  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.140684  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.140710  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.140731  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.141234  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.141257  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.141425  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0717 19:38:31.141431  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.141919  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.142149  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.142181  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.142410  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.142435  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.142824  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.143055  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.147020  459447 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-378944"
	W0717 19:38:31.147043  459447 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:38:31.147076  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.147428  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.147462  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.158908  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
	I0717 19:38:31.159534  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.160413  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.160438  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.161313  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.161588  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.161794  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0717 19:38:31.162315  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.162935  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.162963  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.163360  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.163618  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.164401  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.165089  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40011
	I0717 19:38:31.165402  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.165493  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.166082  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.166108  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.166133  459447 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:38:31.166520  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.166951  459447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:38:31.166995  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.167294  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.167678  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:38:31.167700  459447 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:38:31.167725  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.168668  459447 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:38:31.168686  459447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:38:31.168704  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.171358  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.171986  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.172013  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172236  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172379  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.172558  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.172646  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.172749  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.172778  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172902  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.173186  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.173396  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.173570  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.173711  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.184779  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0717 19:38:31.185400  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.186325  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.186350  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.186736  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.186981  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.188627  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.188841  459447 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:38:31.188860  459447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:38:31.188881  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.191674  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.192104  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.192129  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.192375  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.192868  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.193084  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.193250  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.351524  459447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:38:31.365996  459447 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-378944" to be "Ready" ...
	I0717 19:38:31.376135  459447 node_ready.go:49] node "default-k8s-diff-port-378944" has status "Ready":"True"
	I0717 19:38:31.376168  459447 node_ready.go:38] duration metric: took 10.135533ms for node "default-k8s-diff-port-378944" to be "Ready" ...
	I0717 19:38:31.376182  459447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:31.385746  459447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:31.471924  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:38:31.488412  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:38:31.488440  459447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:38:31.489634  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:38:31.578028  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:38:31.578059  459447 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:38:31.653567  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:38:31.653598  459447 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:38:31.692100  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:38:32.700716  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.228741753s)
	I0717 19:38:32.700795  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.700796  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.211127639s)
	I0717 19:38:32.700851  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.700869  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.700808  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703149  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703149  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703155  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703183  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703193  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.703202  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703163  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703235  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703254  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.703267  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703505  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703517  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703529  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703554  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703520  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.778305  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.778331  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.778693  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.778779  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.778733  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.942079  459447 pod_ready.go:92] pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:32.942114  459447 pod_ready.go:81] duration metric: took 1.556334407s for pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:32.942128  459447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.018197  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326052616s)
	I0717 19:38:33.018262  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:33.018277  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:33.018625  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:33.018649  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:33.018659  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:33.018669  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:33.018696  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:33.018956  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:33.018975  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:33.018996  459447 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-378944"
	I0717 19:38:33.021803  459447 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:38:33.023032  459447 addons.go:510] duration metric: took 1.901306809s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:38:33.949013  459447 pod_ready.go:92] pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.949038  459447 pod_ready.go:81] duration metric: took 1.006901797s for pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.949050  459447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.953373  459447 pod_ready.go:92] pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.953393  459447 pod_ready.go:81] duration metric: took 4.33631ms for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.953404  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.957845  459447 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.957869  459447 pod_ready.go:81] duration metric: took 4.456882ms for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.957881  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.962465  459447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.962488  459447 pod_ready.go:81] duration metric: took 4.598385ms for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.962500  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vhjq4" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.170244  459447 pod_ready.go:92] pod "kube-proxy-vhjq4" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:34.170274  459447 pod_ready.go:81] duration metric: took 207.766629ms for pod "kube-proxy-vhjq4" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.170284  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.570267  459447 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:34.570299  459447 pod_ready.go:81] duration metric: took 400.008056ms for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.570324  459447 pod_ready.go:38] duration metric: took 3.194102991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:34.570356  459447 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:38:34.570415  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:38:34.590893  459447 api_server.go:72] duration metric: took 3.469242847s to wait for apiserver process to appear ...
	I0717 19:38:34.590918  459447 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:38:34.590939  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:38:34.596086  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 200:
	ok
	I0717 19:38:34.597189  459447 api_server.go:141] control plane version: v1.30.2
	I0717 19:38:34.597213  459447 api_server.go:131] duration metric: took 6.288225ms to wait for apiserver health ...
	I0717 19:38:34.597221  459447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:38:34.774523  459447 system_pods.go:59] 9 kube-system pods found
	I0717 19:38:34.774563  459447 system_pods.go:61] "coredns-7db6d8ff4d-jnwgp" [f86efa81-cbe0-44a7-888f-639af3dc58ad] Running
	I0717 19:38:34.774571  459447 system_pods.go:61] "coredns-7db6d8ff4d-xbtct" [c24ce9ab-babb-4589-8046-e8e2d4ca68af] Running
	I0717 19:38:34.774577  459447 system_pods.go:61] "etcd-default-k8s-diff-port-378944" [b15d7ac0-b014-4fed-8e03-3b2eb8b23911] Running
	I0717 19:38:34.774582  459447 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-378944" [78cd796b-d751-44dd-91e7-85b48c77d87c] Running
	I0717 19:38:34.774590  459447 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-378944" [4981a20d-ce96-4c27-9b14-17e4a8a18a7c] Running
	I0717 19:38:34.774595  459447 system_pods.go:61] "kube-proxy-vhjq4" [092af79d-ebc0-4e16-97ef-725195e95344] Running
	I0717 19:38:34.774598  459447 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-378944" [60a0717a-ad29-4360-a514-afc1081f115c] Running
	I0717 19:38:34.774607  459447 system_pods.go:61] "metrics-server-569cc877fc-hvknj" [d214e760-d49e-4554-85c2-77e5da1b150f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:38:34.774613  459447 system_pods.go:61] "storage-provisioner" [153a102e-f07b-46b4-a9d0-9e754237ca6e] Running
	I0717 19:38:34.774624  459447 system_pods.go:74] duration metric: took 177.395337ms to wait for pod list to return data ...
	I0717 19:38:34.774636  459447 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:38:34.970004  459447 default_sa.go:45] found service account: "default"
	I0717 19:38:34.970040  459447 default_sa.go:55] duration metric: took 195.394993ms for default service account to be created ...
	I0717 19:38:34.970054  459447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:38:35.173288  459447 system_pods.go:86] 9 kube-system pods found
	I0717 19:38:35.173327  459447 system_pods.go:89] "coredns-7db6d8ff4d-jnwgp" [f86efa81-cbe0-44a7-888f-639af3dc58ad] Running
	I0717 19:38:35.173336  459447 system_pods.go:89] "coredns-7db6d8ff4d-xbtct" [c24ce9ab-babb-4589-8046-e8e2d4ca68af] Running
	I0717 19:38:35.173343  459447 system_pods.go:89] "etcd-default-k8s-diff-port-378944" [b15d7ac0-b014-4fed-8e03-3b2eb8b23911] Running
	I0717 19:38:35.173352  459447 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-378944" [78cd796b-d751-44dd-91e7-85b48c77d87c] Running
	I0717 19:38:35.173359  459447 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-378944" [4981a20d-ce96-4c27-9b14-17e4a8a18a7c] Running
	I0717 19:38:35.173365  459447 system_pods.go:89] "kube-proxy-vhjq4" [092af79d-ebc0-4e16-97ef-725195e95344] Running
	I0717 19:38:35.173370  459447 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-378944" [60a0717a-ad29-4360-a514-afc1081f115c] Running
	I0717 19:38:35.173377  459447 system_pods.go:89] "metrics-server-569cc877fc-hvknj" [d214e760-d49e-4554-85c2-77e5da1b150f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:38:35.173384  459447 system_pods.go:89] "storage-provisioner" [153a102e-f07b-46b4-a9d0-9e754237ca6e] Running
	I0717 19:38:35.173397  459447 system_pods.go:126] duration metric: took 203.335308ms to wait for k8s-apps to be running ...
	I0717 19:38:35.173406  459447 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:38:35.173471  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:35.188943  459447 system_svc.go:56] duration metric: took 15.522808ms WaitForService to wait for kubelet
	I0717 19:38:35.188980  459447 kubeadm.go:582] duration metric: took 4.067341756s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:38:35.189006  459447 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:38:35.369694  459447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:38:35.369723  459447 node_conditions.go:123] node cpu capacity is 2
	I0717 19:38:35.369748  459447 node_conditions.go:105] duration metric: took 180.736346ms to run NodePressure ...
	I0717 19:38:35.369764  459447 start.go:241] waiting for startup goroutines ...
	I0717 19:38:35.369773  459447 start.go:246] waiting for cluster config update ...
	I0717 19:38:35.369787  459447 start.go:255] writing updated cluster config ...
	I0717 19:38:35.370064  459447 ssh_runner.go:195] Run: rm -f paused
	I0717 19:38:35.422285  459447 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 19:38:35.424315  459447 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-378944" cluster and "default" namespace by default
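	(For reference, a minimal sketch of how the cluster state recorded above could be inspected from this point. The kubectl context name is assumed to match the minikube profile name, which is minikube's default behavior; these commands are illustrative and not part of the captured test output:
	  # list the single control-plane node the log waited on
	  kubectl --context default-k8s-diff-port-378944 get nodes
	  # show the kube-system pods, including the still-pending metrics-server noted above
	  kubectl --context default-k8s-diff-port-378944 -n kube-system get pods
	)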
	I0717 19:38:49.633874  459061 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.96396735s)
	I0717 19:38:49.633958  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:49.653668  459061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:38:49.665421  459061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:38:49.677405  459061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:38:49.677433  459061 kubeadm.go:157] found existing configuration files:
	
	I0717 19:38:49.677485  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:38:49.688418  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:38:49.688515  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:38:49.699121  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:38:49.709505  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:38:49.709622  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:38:49.720533  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:38:49.731191  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:38:49.731259  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:38:49.741071  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:38:49.750483  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:38:49.750540  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:38:49.759991  459061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:38:49.814169  459061 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 19:38:49.814235  459061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:38:49.977655  459061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:38:49.977811  459061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:38:49.977922  459061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:38:50.204096  459061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:38:50.206849  459061 out.go:204]   - Generating certificates and keys ...
	I0717 19:38:50.206956  459061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:38:50.207032  459061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:38:50.207102  459061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:38:50.207227  459061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:38:50.207341  459061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:38:50.207388  459061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:38:50.207448  459061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:38:50.207511  459061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:38:50.207618  459061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:38:50.207732  459061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:38:50.207787  459061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:38:50.207868  459061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:38:50.298049  459061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:38:50.456369  459061 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:38:50.649923  459061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:38:50.771710  459061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:38:50.939506  459061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:38:50.939999  459061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:38:50.942645  459061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:38:50.944456  459061 out.go:204]   - Booting up control plane ...
	I0717 19:38:50.944563  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:38:50.944648  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:38:50.944906  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:38:50.963779  459061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:38:50.964946  459061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:38:50.964999  459061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:38:51.112106  459061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:38:51.112222  459061 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:38:51.613966  459061 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.041018ms
	I0717 19:38:51.614079  459061 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 19:38:56.617120  459061 kubeadm.go:310] [api-check] The API server is healthy after 5.003106336s
	I0717 19:38:56.635312  459061 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:38:56.653249  459061 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:38:56.688277  459061 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:38:56.688570  459061 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-637675 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:38:56.703781  459061 kubeadm.go:310] [bootstrap-token] Using token: 5c1d8d.hedm6ka56xpdzroz
	I0717 19:38:56.705437  459061 out.go:204]   - Configuring RBAC rules ...
	I0717 19:38:56.705575  459061 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:38:56.712968  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:38:56.723899  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:38:56.731634  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:38:56.737169  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:38:56.745083  459061 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:38:57.024680  459061 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:38:57.477396  459061 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 19:38:58.025476  459061 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 19:38:58.026512  459061 kubeadm.go:310] 
	I0717 19:38:58.026631  459061 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 19:38:58.026655  459061 kubeadm.go:310] 
	I0717 19:38:58.026772  459061 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 19:38:58.026790  459061 kubeadm.go:310] 
	I0717 19:38:58.026828  459061 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 19:38:58.026905  459061 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:38:58.026971  459061 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:38:58.026979  459061 kubeadm.go:310] 
	I0717 19:38:58.027070  459061 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 19:38:58.027094  459061 kubeadm.go:310] 
	I0717 19:38:58.027163  459061 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:38:58.027171  459061 kubeadm.go:310] 
	I0717 19:38:58.027242  459061 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 19:38:58.027341  459061 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:38:58.027431  459061 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:38:58.027442  459061 kubeadm.go:310] 
	I0717 19:38:58.027547  459061 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:38:58.027663  459061 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 19:38:58.027673  459061 kubeadm.go:310] 
	I0717 19:38:58.027788  459061 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5c1d8d.hedm6ka56xpdzroz \
	I0717 19:38:58.027949  459061 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 19:38:58.027998  459061 kubeadm.go:310] 	--control-plane 
	I0717 19:38:58.028012  459061 kubeadm.go:310] 
	I0717 19:38:58.028123  459061 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:38:58.028133  459061 kubeadm.go:310] 
	I0717 19:38:58.028235  459061 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5c1d8d.hedm6ka56xpdzroz \
	I0717 19:38:58.028355  459061 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 19:38:58.028891  459061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:38:58.029012  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:38:58.029029  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:38:58.031915  459061 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:38:58.033543  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:38:58.044441  459061 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:38:58.062984  459061 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:38:58.063092  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:58.063115  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-637675 minikube.k8s.io/updated_at=2024_07_17T19_38_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=embed-certs-637675 minikube.k8s.io/primary=true
	I0717 19:38:58.088566  459061 ops.go:34] apiserver oom_adj: -16
	I0717 19:38:58.243142  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:58.743578  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:59.244162  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:59.743393  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:00.244096  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:00.743309  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:01.244049  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:01.743222  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:02.243771  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:02.743459  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:03.243303  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:03.743299  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:04.243263  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:04.743572  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:05.243876  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:05.743567  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:06.244040  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:06.743302  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:07.244174  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:07.744243  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:08.244108  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:08.744208  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:09.243712  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:09.743417  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:10.243321  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:10.743234  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:11.244006  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:11.744244  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:12.243673  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:12.373286  459061 kubeadm.go:1113] duration metric: took 14.310267908s to wait for elevateKubeSystemPrivileges
	I0717 19:39:12.373331  459061 kubeadm.go:394] duration metric: took 5m13.390297719s to StartCluster
	I0717 19:39:12.373357  459061 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:39:12.373461  459061 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:39:12.375404  459061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:39:12.375739  459061 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:39:12.375786  459061 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:39:12.375875  459061 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-637675"
	I0717 19:39:12.375919  459061 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-637675"
	W0717 19:39:12.375933  459061 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:39:12.375967  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.375981  459061 config.go:182] Loaded profile config "embed-certs-637675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:39:12.376031  459061 addons.go:69] Setting default-storageclass=true in profile "embed-certs-637675"
	I0717 19:39:12.376062  459061 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-637675"
	I0717 19:39:12.376333  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.376359  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.376426  459061 addons.go:69] Setting metrics-server=true in profile "embed-certs-637675"
	I0717 19:39:12.376494  459061 addons.go:234] Setting addon metrics-server=true in "embed-certs-637675"
	W0717 19:39:12.376526  459061 addons.go:243] addon metrics-server should already be in state true
	I0717 19:39:12.376596  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.376427  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.376672  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.376981  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.377140  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.378935  459061 out.go:177] * Verifying Kubernetes components...
	I0717 19:39:12.380094  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:39:12.396180  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37281
	I0717 19:39:12.396769  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.397333  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.397359  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.397449  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44781
	I0717 19:39:12.397580  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40945
	I0717 19:39:12.397773  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.397893  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.398045  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.398343  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.398355  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.398387  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.398430  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.398488  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.398499  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.398660  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.398798  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.399295  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.399322  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.399545  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.403398  459061 addons.go:234] Setting addon default-storageclass=true in "embed-certs-637675"
	W0717 19:39:12.403420  459061 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:39:12.403451  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.403872  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.403898  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.415595  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0717 19:39:12.416232  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.417013  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.417033  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.417587  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.418029  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.419082  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0717 19:39:12.420074  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.420699  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.420856  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.420875  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.421414  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.421614  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.423149  459061 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:39:12.423248  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
	I0717 19:39:12.423428  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.423575  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.424023  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.424076  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.424418  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.424571  459061 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:39:12.424588  459061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:39:12.424608  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.424944  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.424980  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.425348  459061 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:39:12.426757  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:39:12.426781  459061 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:39:12.426853  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.427990  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.428571  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.428594  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.429076  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.429456  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.429803  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.430161  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.430952  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.432978  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.433047  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.433185  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.433366  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.433623  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.433978  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.441066  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0717 19:39:12.441557  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.442011  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.442029  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.442447  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.442677  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.444789  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.444999  459061 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:39:12.445015  459061 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:39:12.445036  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.447829  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.448361  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.448390  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.448577  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.448770  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.448936  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.449070  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.728350  459061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:39:12.774599  459061 node_ready.go:35] waiting up to 6m0s for node "embed-certs-637675" to be "Ready" ...
	I0717 19:39:12.787047  459061 node_ready.go:49] node "embed-certs-637675" has status "Ready":"True"
	I0717 19:39:12.787080  459061 node_ready.go:38] duration metric: took 12.442277ms for node "embed-certs-637675" to be "Ready" ...
	I0717 19:39:12.787092  459061 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:39:12.794421  459061 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:12.884786  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:39:12.916243  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:39:12.956508  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:39:12.956539  459061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:39:13.012727  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:39:13.012757  459061 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:39:13.090259  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:39:13.090288  459061 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:39:13.189147  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:39:13.743500  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.743529  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.743886  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.743943  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.743967  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.743984  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.743993  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.744243  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.744292  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.744318  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.745277  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.745344  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.745605  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.745624  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.745632  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.745642  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.745646  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.745835  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.745861  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.745876  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.760884  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.760909  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.761330  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.761352  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.761392  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.809721  459061 pod_ready.go:92] pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:13.809743  459061 pod_ready.go:81] duration metric: took 1.015289517s for pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:13.809753  459061 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.027460  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:14.027489  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:14.027856  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:14.027878  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:14.027889  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:14.027898  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:14.028130  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:14.028146  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:14.028177  459061 addons.go:475] Verifying addon metrics-server=true in "embed-certs-637675"
	I0717 19:39:14.030113  459061 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:39:14.031442  459061 addons.go:510] duration metric: took 1.65566168s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:39:14.816503  459061 pod_ready.go:92] pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.816527  459061 pod_ready.go:81] duration metric: took 1.006767634s for pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.816536  459061 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.820749  459061 pod_ready.go:92] pod "etcd-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.820768  459061 pod_ready.go:81] duration metric: took 4.225695ms for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.820775  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.824793  459061 pod_ready.go:92] pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.824812  459061 pod_ready.go:81] duration metric: took 4.02987ms for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.824823  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.828718  459061 pod_ready.go:92] pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.828738  459061 pod_ready.go:81] duration metric: took 3.907636ms for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.828748  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dns5j" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.178249  459061 pod_ready.go:92] pod "kube-proxy-dns5j" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:15.178276  459061 pod_ready.go:81] duration metric: took 349.519823ms for pod "kube-proxy-dns5j" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.178289  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.578418  459061 pod_ready.go:92] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:15.578445  459061 pod_ready.go:81] duration metric: took 400.149092ms for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.578454  459061 pod_ready.go:38] duration metric: took 2.791350468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:39:15.578471  459061 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:39:15.578526  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:39:15.597456  459061 api_server.go:72] duration metric: took 3.221674147s to wait for apiserver process to appear ...
	I0717 19:39:15.597483  459061 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:39:15.597503  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:39:15.602054  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0717 19:39:15.603214  459061 api_server.go:141] control plane version: v1.30.2
	I0717 19:39:15.603238  459061 api_server.go:131] duration metric: took 5.7478ms to wait for apiserver health ...
	I0717 19:39:15.603248  459061 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:39:15.783262  459061 system_pods.go:59] 9 kube-system pods found
	I0717 19:39:15.783295  459061 system_pods.go:61] "coredns-7db6d8ff4d-45xn7" [9c936942-55bb-44c9-b446-365ec316c390] Running
	I0717 19:39:15.783300  459061 system_pods.go:61] "coredns-7db6d8ff4d-nw8g8" [0313a484-73be-49e2-a483-b15f47abc24a] Running
	I0717 19:39:15.783303  459061 system_pods.go:61] "etcd-embed-certs-637675" [d83ac63c-5eb5-40f0-bf58-37c048642b72] Running
	I0717 19:39:15.783307  459061 system_pods.go:61] "kube-apiserver-embed-certs-637675" [0b60ef89-e78c-4e24-b391-a5d4930d0f5f] Running
	I0717 19:39:15.783310  459061 system_pods.go:61] "kube-controller-manager-embed-certs-637675" [b2da7425-19f4-4435-8a30-17744a3289b0] Running
	I0717 19:39:15.783312  459061 system_pods.go:61] "kube-proxy-dns5j" [4d248751-6ee4-460d-b608-be6586613e3d] Running
	I0717 19:39:15.783315  459061 system_pods.go:61] "kube-scheduler-embed-certs-637675" [43f463da-858a-4261-b7a1-01e504e157f6] Running
	I0717 19:39:15.783321  459061 system_pods.go:61] "metrics-server-569cc877fc-jf42d" [c92dbb96-5721-4ff9-a428-9215223d2b83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:39:15.783325  459061 system_pods.go:61] "storage-provisioner" [11a18e44-b523-46b2-a890-dd693460e032] Running
	I0717 19:39:15.783331  459061 system_pods.go:74] duration metric: took 180.078172ms to wait for pod list to return data ...
	I0717 19:39:15.783339  459061 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:39:15.978711  459061 default_sa.go:45] found service account: "default"
	I0717 19:39:15.978747  459061 default_sa.go:55] duration metric: took 195.400502ms for default service account to be created ...
	I0717 19:39:15.978762  459061 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:39:16.181968  459061 system_pods.go:86] 9 kube-system pods found
	I0717 19:39:16.181997  459061 system_pods.go:89] "coredns-7db6d8ff4d-45xn7" [9c936942-55bb-44c9-b446-365ec316c390] Running
	I0717 19:39:16.182003  459061 system_pods.go:89] "coredns-7db6d8ff4d-nw8g8" [0313a484-73be-49e2-a483-b15f47abc24a] Running
	I0717 19:39:16.182007  459061 system_pods.go:89] "etcd-embed-certs-637675" [d83ac63c-5eb5-40f0-bf58-37c048642b72] Running
	I0717 19:39:16.182011  459061 system_pods.go:89] "kube-apiserver-embed-certs-637675" [0b60ef89-e78c-4e24-b391-a5d4930d0f5f] Running
	I0717 19:39:16.182016  459061 system_pods.go:89] "kube-controller-manager-embed-certs-637675" [b2da7425-19f4-4435-8a30-17744a3289b0] Running
	I0717 19:39:16.182021  459061 system_pods.go:89] "kube-proxy-dns5j" [4d248751-6ee4-460d-b608-be6586613e3d] Running
	I0717 19:39:16.182025  459061 system_pods.go:89] "kube-scheduler-embed-certs-637675" [43f463da-858a-4261-b7a1-01e504e157f6] Running
	I0717 19:39:16.182033  459061 system_pods.go:89] "metrics-server-569cc877fc-jf42d" [c92dbb96-5721-4ff9-a428-9215223d2b83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:39:16.182042  459061 system_pods.go:89] "storage-provisioner" [11a18e44-b523-46b2-a890-dd693460e032] Running
	I0717 19:39:16.182049  459061 system_pods.go:126] duration metric: took 203.281636ms to wait for k8s-apps to be running ...
	I0717 19:39:16.182057  459061 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:39:16.182101  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:39:16.198464  459061 system_svc.go:56] duration metric: took 16.391405ms WaitForService to wait for kubelet
	I0717 19:39:16.198504  459061 kubeadm.go:582] duration metric: took 3.822728067s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:39:16.198531  459061 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:39:16.378407  459061 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:39:16.378440  459061 node_conditions.go:123] node cpu capacity is 2
	I0717 19:39:16.378451  459061 node_conditions.go:105] duration metric: took 179.91335ms to run NodePressure ...
	I0717 19:39:16.378465  459061 start.go:241] waiting for startup goroutines ...
	I0717 19:39:16.378476  459061 start.go:246] waiting for cluster config update ...
	I0717 19:39:16.378489  459061 start.go:255] writing updated cluster config ...
	I0717 19:39:16.378845  459061 ssh_runner.go:195] Run: rm -f paused
	I0717 19:39:16.431808  459061 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 19:39:16.433648  459061 out.go:177] * Done! kubectl is now configured to use "embed-certs-637675" cluster and "default" namespace by default
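
The embed-certs-637675 run above finishes with the metrics-server pod still Pending (containers not ready). A minimal way to re-check the addon state for this profile after the run, assuming the same out/minikube-linux-amd64 binary used elsewhere in this report and the addon's usual k8s-app=metrics-server pod label (an assumption, not shown in this log):

	# list addon status for the profile
	out/minikube-linux-amd64 -p embed-certs-637675 addons list
	# inspect the metrics-server pod that was reported Pending
	kubectl --context embed-certs-637675 -n kube-system get pods -l k8s-app=metrics-server -o wide
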
	I0717 19:39:46.819105  459741 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:39:46.819209  459741 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 19:39:46.820837  459741 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:39:46.820940  459741 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:39:46.821010  459741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:39:46.821148  459741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:39:46.821282  459741 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:39:46.821377  459741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:39:46.823092  459741 out.go:204]   - Generating certificates and keys ...
	I0717 19:39:46.823190  459741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:39:46.823280  459741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:39:46.823409  459741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:39:46.823509  459741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:39:46.823629  459741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:39:46.823715  459741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:39:46.823802  459741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:39:46.823885  459741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:39:46.823975  459741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:39:46.824067  459741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:39:46.824109  459741 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:39:46.824183  459741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:39:46.824248  459741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:39:46.824309  459741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:39:46.824409  459741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:39:46.824506  459741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:39:46.824642  459741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:39:46.824729  459741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:39:46.824775  459741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:39:46.824869  459741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:39:46.826222  459741 out.go:204]   - Booting up control plane ...
	I0717 19:39:46.826334  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:39:46.826483  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:39:46.826566  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:39:46.826677  459741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:39:46.826855  459741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:39:46.826954  459741 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:39:46.827061  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827286  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827365  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827537  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827618  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827814  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827916  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.828105  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.828210  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.828440  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.828449  459741 kubeadm.go:310] 
	I0717 19:39:46.828482  459741 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:39:46.828544  459741 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:39:46.828555  459741 kubeadm.go:310] 
	I0717 19:39:46.828601  459741 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:39:46.828648  459741 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:39:46.828787  459741 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:39:46.828795  459741 kubeadm.go:310] 
	I0717 19:39:46.828928  459741 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:39:46.828975  459741 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:39:46.829023  459741 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:39:46.829033  459741 kubeadm.go:310] 
	I0717 19:39:46.829156  459741 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:39:46.829280  459741 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:39:46.829288  459741 kubeadm.go:310] 
	I0717 19:39:46.829430  459741 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:39:46.829538  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:39:46.829640  459741 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:39:46.829753  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:39:46.829812  459741 kubeadm.go:310] 
	W0717 19:39:46.829883  459741 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
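
The repeated connection-refused messages above come from kubeadm's wait-control-plane loop probing the kubelet's local healthz endpoint; the kubelet never became healthy, so the static control-plane pods were never started. The checks kubeadm suggests can be run directly on the guest (for example via minikube ssh into the affected profile, whose name is not shown in this excerpt); a minimal sketch using only the commands quoted in the output above:

	# the probe kubeadm keeps retrying (default kubelet healthz port 10248)
	curl -sSL http://localhost:10248/healthz
	# kubelet service state and recent logs
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list any control-plane containers cri-o did manage to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
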
	
	I0717 19:39:46.829939  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:39:47.290949  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:39:47.307166  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:39:47.318260  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:39:47.318283  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:39:47.318336  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:39:47.328087  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:39:47.328150  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:39:47.339029  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:39:47.348854  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:39:47.348913  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:39:47.358498  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:39:47.368592  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:39:47.368651  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:39:47.379802  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:39:47.391069  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:39:47.391139  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:39:47.402410  459741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:39:47.620822  459741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:41:43.630999  459741 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:41:43.631161  459741 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 19:41:43.631238  459741 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:41:43.631322  459741 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:41:43.631452  459741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:41:43.631595  459741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:41:43.631767  459741 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:41:43.631852  459741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:41:43.633956  459741 out.go:204]   - Generating certificates and keys ...
	I0717 19:41:43.634058  459741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:41:43.634160  459741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:41:43.634292  459741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:41:43.634382  459741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:41:43.634457  459741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:41:43.634560  459741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:41:43.634646  459741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:41:43.634743  459741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:41:43.634848  459741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:41:43.634977  459741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:41:43.635038  459741 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:41:43.635088  459741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:41:43.635129  459741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:41:43.635173  459741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:41:43.635240  459741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:41:43.635326  459741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:41:43.635477  459741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:41:43.635594  459741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:41:43.635675  459741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:41:43.635758  459741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:41:43.637529  459741 out.go:204]   - Booting up control plane ...
	I0717 19:41:43.637719  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:41:43.637857  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:41:43.637948  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:41:43.638086  459741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:41:43.638278  459741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:41:43.638336  459741 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:41:43.638427  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.638656  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.638732  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.638966  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639046  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639310  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639407  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639665  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639769  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639950  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639969  459741 kubeadm.go:310] 
	I0717 19:41:43.640006  459741 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:41:43.640047  459741 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:41:43.640056  459741 kubeadm.go:310] 
	I0717 19:41:43.640101  459741 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:41:43.640148  459741 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:41:43.640247  459741 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:41:43.640255  459741 kubeadm.go:310] 
	I0717 19:41:43.640365  459741 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:41:43.640398  459741 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:41:43.640426  459741 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:41:43.640434  459741 kubeadm.go:310] 
	I0717 19:41:43.640580  459741 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:41:43.640664  459741 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:41:43.640676  459741 kubeadm.go:310] 
	I0717 19:41:43.640772  459741 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:41:43.640849  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:41:43.640912  459741 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:41:43.640975  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:41:43.640997  459741 kubeadm.go:310] 
	I0717 19:41:43.641050  459741 kubeadm.go:394] duration metric: took 8m2.947491611s to StartCluster
	I0717 19:41:43.641102  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:41:43.641159  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:41:43.691693  459741 cri.go:89] found id: ""
	I0717 19:41:43.691734  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.691746  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:41:43.691755  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:41:43.691822  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:41:43.730266  459741 cri.go:89] found id: ""
	I0717 19:41:43.730301  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.730311  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:41:43.730319  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:41:43.730401  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:41:43.766878  459741 cri.go:89] found id: ""
	I0717 19:41:43.766907  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.766916  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:41:43.766922  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:41:43.767012  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:41:43.810002  459741 cri.go:89] found id: ""
	I0717 19:41:43.810040  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.810051  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:41:43.810059  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:41:43.810133  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:41:43.846561  459741 cri.go:89] found id: ""
	I0717 19:41:43.846621  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.846637  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:41:43.846645  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:41:43.846715  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:41:43.884047  459741 cri.go:89] found id: ""
	I0717 19:41:43.884080  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.884091  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:41:43.884099  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:41:43.884224  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:41:43.931636  459741 cri.go:89] found id: ""
	I0717 19:41:43.931677  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.931691  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:41:43.931699  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:41:43.931768  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:41:43.969202  459741 cri.go:89] found id: ""
	I0717 19:41:43.969240  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.969260  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:41:43.969275  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:41:43.969296  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:41:44.026443  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:41:44.026500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:41:44.042750  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:41:44.042788  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:41:44.140053  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
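
The refused connection to localhost:8443 is consistent with the empty crictl listings gathered just above: no kube-apiserver (or any other control-plane) container was ever created, so kubectl describe nodes has no API server to reach. Confirming this on the node uses the same listing commands this log already runs; repeated here as a minimal check for reference:

	# no output means no kube-apiserver container exists in any state
	sudo crictl ps -a --quiet --name=kube-apiserver
	# broader view of whatever containers cri-o has created
	sudo crictl ps -a
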
	I0717 19:41:44.140079  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:41:44.140093  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:41:44.263660  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:41:44.263704  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 19:41:44.311783  459741 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 19:41:44.311838  459741 out.go:239] * 
	W0717 19:41:44.311948  459741 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:41:44.311982  459741 out.go:239] * 
	W0717 19:41:44.313153  459741 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:41:44.316845  459741 out.go:177] 
	W0717 19:41:44.318001  459741 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:41:44.318059  459741 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 19:41:44.318087  459741 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 19:41:44.319471  459741 out.go:177] 
	
	
	==> CRI-O <==
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.385138031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245306385107453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13fb0995-ca1b-47e8-8027-8cbbcef2c4d7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.385858398Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de91f4f6-ef76-44c9-84bb-a7d94a324195 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.385931528Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de91f4f6-ef76-44c9-84bb-a7d94a324195 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.386018479Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=de91f4f6-ef76-44c9-84bb-a7d94a324195 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.420547013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf778703-c382-4cdc-9e0c-076e5a6c49fd name=/runtime.v1.RuntimeService/Version
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.420626533Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf778703-c382-4cdc-9e0c-076e5a6c49fd name=/runtime.v1.RuntimeService/Version
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.421707146Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5319de75-0957-4a5b-8371-68fde9f5d05b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.422158765Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245306422139563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5319de75-0957-4a5b-8371-68fde9f5d05b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.422876989Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efbb05d6-ab1b-4eb3-9097-e126e7c5ee4a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.422937059Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efbb05d6-ab1b-4eb3-9097-e126e7c5ee4a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.423015501Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=efbb05d6-ab1b-4eb3-9097-e126e7c5ee4a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.457019834Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78a5d09a-d25f-4984-9b48-7edab1bfd429 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.457117475Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78a5d09a-d25f-4984-9b48-7edab1bfd429 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.459029074Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95b7fe1f-3204-48e0-8891-f4e98ad27104 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.459405153Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245306459384455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95b7fe1f-3204-48e0-8891-f4e98ad27104 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.460113008Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e699789-d4ed-4c55-969d-06913df393d9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.460179621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e699789-d4ed-4c55-969d-06913df393d9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.460221886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4e699789-d4ed-4c55-969d-06913df393d9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.498476833Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8e1a446-05e9-42f2-a4b4-690de96a397e name=/runtime.v1.RuntimeService/Version
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.498637953Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8e1a446-05e9-42f2-a4b4-690de96a397e name=/runtime.v1.RuntimeService/Version
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.500083061Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe30c603-3810-48cc-9d8b-ea64312554aa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.500602026Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245306500580700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe30c603-3810-48cc-9d8b-ea64312554aa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.501250299Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d39ccecb-4354-40a7-9dfe-940021422cff name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.501327102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d39ccecb-4354-40a7-9dfe-940021422cff name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:41:46 old-k8s-version-998147 crio[650]: time="2024-07-17 19:41:46.501386816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d39ccecb-4354-40a7-9dfe-940021422cff name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul17 19:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052125] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045822] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.749399] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.651884] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.750489] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.317708] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.064289] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056621] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.217924] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.129076] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.259232] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +6.636882] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.063978] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.692971] systemd-fstab-generator[970]: Ignoring "noauto" option for root device
	[ +13.037868] kauditd_printk_skb: 46 callbacks suppressed
	[Jul17 19:37] systemd-fstab-generator[5048]: Ignoring "noauto" option for root device
	[Jul17 19:39] systemd-fstab-generator[5324]: Ignoring "noauto" option for root device
	[  +0.060287] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:41:46 up 8 min,  0 users,  load average: 0.03, 0.10, 0.06
	Linux old-k8s-version-998147 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]:         /usr/local/go/src/net/lookup.go:299 +0x685
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc00035e480, 0x48ab5d6, 0x3, 0xc000bc1830, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc00035e480, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000bc1830, 0x24, 0x0, ...)
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]: net.(*Dialer).DialContext(0xc000ba8720, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000bc1830, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000babc80, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000bc1830, 0x24, 0x60, 0x7f0bd42f3098, 0x118, ...)
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]: net/http.(*Transport).dial(0xc00068c000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000bc1830, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]: net/http.(*Transport).dialConn(0xc00068c000, 0x4f7fe00, 0xc000052030, 0x0, 0xc00097c3c0, 0x5, 0xc000bc1830, 0x24, 0x0, 0xc0007f25a0, ...)
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]: net/http.(*Transport).dialConnFor(0xc00068c000, 0xc0000d06e0)
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]: created by net/http.(*Transport).queueForDial
	Jul 17 19:41:43 old-k8s-version-998147 kubelet[5499]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 17 19:41:44 old-k8s-version-998147 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 17 19:41:44 old-k8s-version-998147 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 19:41:44 old-k8s-version-998147 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 19:41:44 old-k8s-version-998147 kubelet[5558]: I0717 19:41:44.252134    5558 server.go:416] Version: v1.20.0
	Jul 17 19:41:44 old-k8s-version-998147 kubelet[5558]: I0717 19:41:44.252411    5558 server.go:837] Client rotation is on, will bootstrap in background
	Jul 17 19:41:44 old-k8s-version-998147 kubelet[5558]: I0717 19:41:44.255179    5558 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 19:41:44 old-k8s-version-998147 kubelet[5558]: W0717 19:41:44.256580    5558 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 17 19:41:44 old-k8s-version-998147 kubelet[5558]: I0717 19:41:44.256775    5558 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-998147 -n old-k8s-version-998147
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-998147 -n old-k8s-version-998147: exit status 2 (240.371016ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-998147" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (756.80s)
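The kubeadm output above shows the kubelet never answering its health check on port 10248 and CRI-O reporting an empty container list, and minikube's own suggestion points at the kubelet cgroup driver. A minimal manual triage sketch for this profile, built only from the commands the output itself suggests (the retry flags are assumptions taken from the Audit log for this profile, not part of the test harness):

	# Check why the kubelet keeps restarting (restart counter was at 20 in the log above).
	out/minikube-linux-amd64 -p old-k8s-version-998147 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-998147 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
	# List any control-plane containers CRI-O managed to start (the list was empty above).
	out/minikube-linux-amd64 -p old-k8s-version-998147 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup-driver hint from the error message (flags assumed from the profile's Audit entry).
	out/minikube-linux-amd64 start -p old-k8s-version-998147 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
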

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 19:37:49.705072  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-713715 -n no-preload-713715
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-17 19:46:44.890334245 +0000 UTC m=+6261.207351913
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
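The wait above is polling for pods matching the selector k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. A quick manual cross-check, assuming the kubeconfig context carries the profile name (as minikube normally sets it), would look like:

	# Context name assumed to match the minikube profile.
	kubectl --context no-preload-713715 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context no-preload-713715 describe pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard

If the list is empty, the dashboard addon pods were never scheduled; if they exist but are not Ready, the describe output shows the failing condition.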
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713715 -n no-preload-713715
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-713715 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-713715 logs -n 25: (2.208571787s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-369638 sudo cat                              | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo find                             | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo crio                             | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-369638                                       | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	| delete  | -p                                                     | disable-driver-mounts-728347 | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | disable-driver-mounts-728347                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:25 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-637675            | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC | 17 Jul 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-713715             | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC | 17 Jul 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-713715                                   | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-378944  | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC | 17 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC |                     |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-998147        | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-637675                 | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-713715                  | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC | 17 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-713715 --memory=2200                     | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:37 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-378944       | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:38 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-998147             | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:29:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:29:11.500453  459741 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:29:11.500622  459741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:29:11.500633  459741 out.go:304] Setting ErrFile to fd 2...
	I0717 19:29:11.500639  459741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:29:11.500842  459741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:29:11.501399  459741 out.go:298] Setting JSON to false
	I0717 19:29:11.502411  459741 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11494,"bootTime":1721233057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:29:11.502474  459741 start.go:139] virtualization: kvm guest
	I0717 19:29:11.504961  459741 out.go:177] * [old-k8s-version-998147] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:29:11.506551  459741 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 19:29:11.506614  459741 notify.go:220] Checking for updates...
	I0717 19:29:11.509388  459741 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:29:11.511209  459741 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:29:11.512669  459741 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:29:11.514164  459741 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:29:11.515499  459741 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:29:11.517240  459741 config.go:182] Loaded profile config "old-k8s-version-998147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:29:11.517702  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:29:11.517772  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:29:11.533954  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42501
	I0717 19:29:11.534390  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:29:11.534975  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:29:11.535003  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:29:11.535362  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:29:11.535550  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:29:11.537723  459741 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 19:29:11.539119  459741 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:29:11.539416  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:29:11.539452  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:29:11.554412  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32849
	I0717 19:29:11.554815  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:29:11.555296  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:29:11.555317  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:29:11.555633  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:29:11.555830  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:29:11.590907  459741 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:29:11.592089  459741 start.go:297] selected driver: kvm2
	I0717 19:29:11.592110  459741 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:29:11.592224  459741 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:29:11.592942  459741 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:29:11.593047  459741 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:29:11.607578  459741 install.go:137] /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 19:29:11.607960  459741 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:29:11.608027  459741 cni.go:84] Creating CNI manager for ""
	I0717 19:29:11.608045  459741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:29:11.608102  459741 start.go:340] cluster config:
	{Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:29:11.608223  459741 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:29:11.609956  459741 out.go:177] * Starting "old-k8s-version-998147" primary control-plane node in "old-k8s-version-998147" cluster
	I0717 19:29:15.576809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:11.611130  459741 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:29:11.611167  459741 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 19:29:11.611178  459741 cache.go:56] Caching tarball of preloaded images
	I0717 19:29:11.611285  459741 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:29:11.611302  459741 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 19:29:11.611414  459741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json ...
	I0717 19:29:11.611598  459741 start.go:360] acquireMachinesLock for old-k8s-version-998147: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:29:18.648779  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:24.728819  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:27.800821  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:33.880750  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:36.952809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:43.032777  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:46.104785  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:52.184787  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:55.260741  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:01.336761  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:04.408863  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:10.488814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:13.560771  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:19.640809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:22.712791  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:28.792742  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:31.864819  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:37.944814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:41.016844  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:47.096765  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:50.168766  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:56.248814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:59.320805  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:05.400752  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:08.472800  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:14.552805  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:17.624781  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:23.704775  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:26.776769  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:32.856798  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:35.928859  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:42.008795  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:45.080741  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:51.160806  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:54.232765  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:00.312835  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:03.384814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:09.464779  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:12.536704  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:18.616758  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:21.688749  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:27.768726  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:30.840760  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:33.845161  459147 start.go:364] duration metric: took 4m31.30170624s to acquireMachinesLock for "no-preload-713715"
	I0717 19:32:33.845231  459147 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:32:33.845239  459147 fix.go:54] fixHost starting: 
	I0717 19:32:33.845641  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:32:33.845672  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:32:33.861218  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I0717 19:32:33.861739  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:32:33.862269  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:32:33.862294  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:32:33.862688  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:32:33.862906  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:33.863078  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:32:33.864713  459147 fix.go:112] recreateIfNeeded on no-preload-713715: state=Stopped err=<nil>
	I0717 19:32:33.864747  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	W0717 19:32:33.864918  459147 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:32:33.866791  459147 out.go:177] * Restarting existing kvm2 VM for "no-preload-713715" ...
	I0717 19:32:33.842533  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:32:33.842571  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:32:33.842991  459061 buildroot.go:166] provisioning hostname "embed-certs-637675"
	I0717 19:32:33.843030  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:32:33.843258  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:32:33.844991  459061 machine.go:97] duration metric: took 4m37.424855793s to provisionDockerMachine
	I0717 19:32:33.845049  459061 fix.go:56] duration metric: took 4m37.444711115s for fixHost
	I0717 19:32:33.845058  459061 start.go:83] releasing machines lock for "embed-certs-637675", held for 4m37.444736968s
	W0717 19:32:33.845085  459061 start.go:714] error starting host: provision: host is not running
	W0717 19:32:33.845226  459061 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 19:32:33.845240  459061 start.go:729] Will try again in 5 seconds ...
	I0717 19:32:33.868034  459147 main.go:141] libmachine: (no-preload-713715) Calling .Start
	I0717 19:32:33.868203  459147 main.go:141] libmachine: (no-preload-713715) Ensuring networks are active...
	I0717 19:32:33.868998  459147 main.go:141] libmachine: (no-preload-713715) Ensuring network default is active
	I0717 19:32:33.869310  459147 main.go:141] libmachine: (no-preload-713715) Ensuring network mk-no-preload-713715 is active
	I0717 19:32:33.869667  459147 main.go:141] libmachine: (no-preload-713715) Getting domain xml...
	I0717 19:32:33.870300  459147 main.go:141] libmachine: (no-preload-713715) Creating domain...
	I0717 19:32:35.077699  459147 main.go:141] libmachine: (no-preload-713715) Waiting to get IP...
	I0717 19:32:35.078453  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.078991  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.079061  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.078942  460425 retry.go:31] will retry after 213.705648ms: waiting for machine to come up
	I0717 19:32:35.294580  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.294987  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.295015  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.294949  460425 retry.go:31] will retry after 341.137055ms: waiting for machine to come up
	I0717 19:32:35.637531  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.637894  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.637922  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.637842  460425 retry.go:31] will retry after 479.10915ms: waiting for machine to come up
	I0717 19:32:36.118434  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:36.118887  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:36.118918  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:36.118837  460425 retry.go:31] will retry after 404.249247ms: waiting for machine to come up
	I0717 19:32:36.524442  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:36.524847  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:36.524880  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:36.524812  460425 retry.go:31] will retry after 737.708741ms: waiting for machine to come up
	I0717 19:32:37.263864  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:37.264365  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:37.264393  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:37.264241  460425 retry.go:31] will retry after 793.874529ms: waiting for machine to come up
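(The "will retry after …" lines above show minikube polling libvirt until the restarted VM picks up a DHCP lease, with the pause between attempts growing each round. As a rough illustration only, the pattern looks like the Go sketch below; the pollOnce helper, the delay schedule, and the jitter are assumptions for this sketch, not minikube's actual retry.go implementation.)

// Illustrative only: retry with growing, jittered delays, approximating the
// "will retry after ...: waiting for machine to come up" loop in the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("machine has no IP yet")

// pollOnce stands in for "ask libvirt whether the domain has a DHCP lease yet".
func pollOnce(attempt int) (string, error) {
	if attempt < 5 { // pretend the lease shows up on the fifth try
		return "", errNoIP
	}
	return "192.168.61.66", nil
}

func waitForIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	for attempt := 1; ; attempt++ {
		ip, err := pollOnce(attempt)
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for IP: %w", err)
		}
		// Grow the delay with the attempt number and add jitter so that
		// several concurrently restarting profiles do not poll in lockstep.
		delay := time.Duration(attempt) * 200 * time.Millisecond
		delay += time.Duration(rand.Int63n(int64(150 * time.Millisecond)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	fmt.Println(ip, err)
}

(The jitter matters in this run because three profiles are being restarted at once and would otherwise hit libvirt on the same cadence.)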
	I0717 19:32:38.846990  459061 start.go:360] acquireMachinesLock for embed-certs-637675: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:32:38.059206  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:38.059645  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:38.059671  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:38.059592  460425 retry.go:31] will retry after 831.952935ms: waiting for machine to come up
	I0717 19:32:38.893113  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:38.893595  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:38.893623  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:38.893496  460425 retry.go:31] will retry after 955.463175ms: waiting for machine to come up
	I0717 19:32:39.850681  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:39.851111  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:39.851146  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:39.851045  460425 retry.go:31] will retry after 1.513026699s: waiting for machine to come up
	I0717 19:32:41.365899  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:41.366497  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:41.366528  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:41.366435  460425 retry.go:31] will retry after 1.503398124s: waiting for machine to come up
	I0717 19:32:42.872396  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:42.872932  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:42.872961  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:42.872904  460425 retry.go:31] will retry after 2.818722445s: waiting for machine to come up
	I0717 19:32:45.692847  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:45.693240  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:45.693270  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:45.693168  460425 retry.go:31] will retry after 2.647833654s: waiting for machine to come up
	I0717 19:32:48.344167  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:48.344671  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:48.344711  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:48.344593  460425 retry.go:31] will retry after 3.625317785s: waiting for machine to come up
	I0717 19:32:51.973297  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.973853  459147 main.go:141] libmachine: (no-preload-713715) Found IP for machine: 192.168.61.66
	I0717 19:32:51.973882  459147 main.go:141] libmachine: (no-preload-713715) Reserving static IP address...
	I0717 19:32:51.973897  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has current primary IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.974288  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "no-preload-713715", mac: "52:54:00:9e:fc:38", ip: "192.168.61.66"} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:51.974314  459147 main.go:141] libmachine: (no-preload-713715) DBG | skip adding static IP to network mk-no-preload-713715 - found existing host DHCP lease matching {name: "no-preload-713715", mac: "52:54:00:9e:fc:38", ip: "192.168.61.66"}
	I0717 19:32:51.974324  459147 main.go:141] libmachine: (no-preload-713715) Reserved static IP address: 192.168.61.66
	I0717 19:32:51.974334  459147 main.go:141] libmachine: (no-preload-713715) Waiting for SSH to be available...
	I0717 19:32:51.974342  459147 main.go:141] libmachine: (no-preload-713715) DBG | Getting to WaitForSSH function...
	I0717 19:32:51.976322  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.976760  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:51.976804  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.976918  459147 main.go:141] libmachine: (no-preload-713715) DBG | Using SSH client type: external
	I0717 19:32:51.976956  459147 main.go:141] libmachine: (no-preload-713715) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa (-rw-------)
	I0717 19:32:51.976993  459147 main.go:141] libmachine: (no-preload-713715) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:32:51.977004  459147 main.go:141] libmachine: (no-preload-713715) DBG | About to run SSH command:
	I0717 19:32:51.977013  459147 main.go:141] libmachine: (no-preload-713715) DBG | exit 0
	I0717 19:32:52.100405  459147 main.go:141] libmachine: (no-preload-713715) DBG | SSH cmd err, output: <nil>: 
	I0717 19:32:52.100914  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetConfigRaw
	I0717 19:32:52.101578  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:52.103993  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.104431  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.104461  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.104779  459147 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/config.json ...
	I0717 19:32:52.104987  459147 machine.go:94] provisionDockerMachine start ...
	I0717 19:32:52.105006  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:52.105234  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.107642  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.108002  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.108027  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.108132  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.108311  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.108472  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.108628  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.108804  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.109027  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.109037  459147 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:32:52.216916  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:32:52.216949  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.217209  459147 buildroot.go:166] provisioning hostname "no-preload-713715"
	I0717 19:32:52.217238  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.217427  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.220152  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.220434  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.220472  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.220716  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.220923  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.221117  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.221230  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.221386  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.221575  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.221592  459147 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-713715 && echo "no-preload-713715" | sudo tee /etc/hostname
	I0717 19:32:52.343761  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-713715
	
	I0717 19:32:52.343802  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.347059  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.347370  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.347400  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.347652  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.347883  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.348182  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.348374  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.348625  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.348820  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.348836  459147 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-713715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-713715/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-713715' | sudo tee -a /etc/hosts; 
				fi
			fi
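(The SSH command above keeps the guest's /etc/hosts consistent with the new hostname: if no entry already ends in the hostname, it either rewrites the existing 127.0.1.1 line in place or appends a new one. A minimal sketch of the same bookkeeping as a pure Go function follows; ensureHostname is a hypothetical helper written for this illustration, not part of minikube.)

// Illustrative only: the 127.0.1.1 bookkeeping from the shell snippet above,
// expressed as a pure function over the contents of /etc/hosts.
package main

import (
	"fmt"
	"strings"
)

func ensureHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	// First pass: if some entry already maps to this hostname, leave the file alone.
	for _, line := range lines {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == name {
			return hosts
		}
	}
	// Second pass: rewrite an existing 127.0.1.1 entry in place.
	for i, line := range lines {
		fields := strings.Fields(line)
		if len(fields) >= 1 && fields[0] == "127.0.1.1" {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	// Otherwise append a fresh entry.
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostname(in, "no-preload-713715"))
}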
	I0717 19:32:53.313707  459447 start.go:364] duration metric: took 4m16.715852426s to acquireMachinesLock for "default-k8s-diff-port-378944"
	I0717 19:32:53.313783  459447 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:32:53.313790  459447 fix.go:54] fixHost starting: 
	I0717 19:32:53.314243  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:32:53.314285  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:32:53.330763  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0717 19:32:53.331159  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:32:53.331660  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:32:53.331686  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:32:53.332089  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:32:53.332319  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:32:53.332479  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:32:53.334126  459447 fix.go:112] recreateIfNeeded on default-k8s-diff-port-378944: state=Stopped err=<nil>
	I0717 19:32:53.334172  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	W0717 19:32:53.334327  459447 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:32:53.336801  459447 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-378944" ...
	I0717 19:32:52.462144  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:32:52.462179  459147 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:32:52.462197  459147 buildroot.go:174] setting up certificates
	I0717 19:32:52.462210  459147 provision.go:84] configureAuth start
	I0717 19:32:52.462224  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.462579  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:52.465348  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.465889  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.465919  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.466069  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.468522  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.468914  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.468950  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.469041  459147 provision.go:143] copyHostCerts
	I0717 19:32:52.469126  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:32:52.469146  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:32:52.469234  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:32:52.469357  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:32:52.469367  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:32:52.469408  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:32:52.469492  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:32:52.469501  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:32:52.469535  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:32:52.469621  459147 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.no-preload-713715 san=[127.0.0.1 192.168.61.66 localhost minikube no-preload-713715]
	I0717 19:32:52.650963  459147 provision.go:177] copyRemoteCerts
	I0717 19:32:52.651037  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:32:52.651075  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.654245  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.654597  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.654616  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.654825  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.655055  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.655215  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.655411  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:52.739048  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:32:52.762566  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 19:32:52.785616  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:32:52.808881  459147 provision.go:87] duration metric: took 346.648771ms to configureAuth
	I0717 19:32:52.808922  459147 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:32:52.809145  459147 config.go:182] Loaded profile config "no-preload-713715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:32:52.809246  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.812111  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.812423  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.812457  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.812686  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.812885  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.813186  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.813346  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.813542  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.813778  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.813800  459147 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:32:53.076607  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:32:53.076638  459147 machine.go:97] duration metric: took 971.636298ms to provisionDockerMachine
	I0717 19:32:53.076652  459147 start.go:293] postStartSetup for "no-preload-713715" (driver="kvm2")
	I0717 19:32:53.076685  459147 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:32:53.076714  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.077033  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:32:53.077068  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.079605  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.079887  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.079911  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.080028  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.080217  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.080401  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.080593  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.163562  459147 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:32:53.167996  459147 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:32:53.168026  459147 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:32:53.168111  459147 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:32:53.168194  459147 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:32:53.168304  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:32:53.178039  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:32:53.201841  459147 start.go:296] duration metric: took 125.171457ms for postStartSetup
	I0717 19:32:53.201908  459147 fix.go:56] duration metric: took 19.356669392s for fixHost
	I0717 19:32:53.201944  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.204438  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.204823  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.204847  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.205012  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.205195  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.205352  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.205501  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.205632  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:53.205807  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:53.205818  459147 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:32:53.313516  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244773.289121394
	
	I0717 19:32:53.313540  459147 fix.go:216] guest clock: 1721244773.289121394
	I0717 19:32:53.313547  459147 fix.go:229] Guest: 2024-07-17 19:32:53.289121394 +0000 UTC Remote: 2024-07-17 19:32:53.201923093 +0000 UTC m=+290.801143172 (delta=87.198301ms)
	I0717 19:32:53.313569  459147 fix.go:200] guest clock delta is within tolerance: 87.198301ms
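(The clock check above parses the guest's seconds.nanoseconds timestamp, apparently from a `date +%s.%N` run whose format verbs are logged as %!s(MISSING)/%!N(MISSING), and compares it with the host's wall clock; the machine is accepted because the 87.198301ms delta is inside the tolerance. A small sketch of that arithmetic is below; the one-second tolerance is an assumption for the example, not necessarily minikube's value.)

// Illustrative sketch of the guest-clock check logged above: parse the guest's
// "seconds.nanoseconds" output, compare it with the host time, and accept the
// machine when the absolute delta is under a tolerance.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part to nanosecond precision.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1721244773.289121394")
	if err != nil {
		panic(err)
	}
	host := time.Unix(1721244773, 201923093) // stand-in for the host's clock
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	tolerance := time.Second
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}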
	I0717 19:32:53.313595  459147 start.go:83] releasing machines lock for "no-preload-713715", held for 19.468370802s
	I0717 19:32:53.313630  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.313917  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:53.316881  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.317256  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.317287  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.317443  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.317922  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.318107  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.318182  459147 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:32:53.318238  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.318358  459147 ssh_runner.go:195] Run: cat /version.json
	I0717 19:32:53.318384  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.321257  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321424  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321620  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.321641  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321748  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.321772  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321815  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.322061  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.322079  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.322282  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.322280  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.322459  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.322464  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.322592  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.401861  459147 ssh_runner.go:195] Run: systemctl --version
	I0717 19:32:53.425378  459147 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:32:53.567192  459147 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:32:53.575354  459147 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:32:53.575425  459147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:32:53.595781  459147 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:32:53.595818  459147 start.go:495] detecting cgroup driver to use...
	I0717 19:32:53.595955  459147 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:32:53.611488  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:32:53.625548  459147 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:32:53.625612  459147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:32:53.639207  459147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:32:53.652721  459147 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:32:53.772322  459147 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:32:53.942009  459147 docker.go:233] disabling docker service ...
	I0717 19:32:53.942092  459147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:32:53.961729  459147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:32:53.974585  459147 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:32:54.112406  459147 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:32:54.245426  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:32:54.259855  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:32:54.278930  459147 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 19:32:54.279008  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.289913  459147 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:32:54.289992  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.300687  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.312480  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.324895  459147 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:32:54.335879  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.347434  459147 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.367882  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.379415  459147 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:32:54.390488  459147 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:32:54.390554  459147 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:32:54.411855  459147 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:32:54.423747  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:32:54.562086  459147 ssh_runner.go:195] Run: sudo systemctl restart crio
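(After the sed edits above, the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf would end up carrying the pause image, the cgroupfs cgroup manager, the pod-scoped conmon cgroup, and the unprivileged-port sysctl. The fragment below is only an illustration of that end state; the exact section headers and any surrounding keys depend on the base config shipped in the minikube ISO and are assumptions here.)

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]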
	I0717 19:32:54.707957  459147 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:32:54.708052  459147 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:32:54.712631  459147 start.go:563] Will wait 60s for crictl version
	I0717 19:32:54.712693  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:54.716329  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:32:54.753525  459147 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:32:54.753634  459147 ssh_runner.go:195] Run: crio --version
	I0717 19:32:54.782659  459147 ssh_runner.go:195] Run: crio --version
	I0717 19:32:54.813996  459147 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 19:32:53.338154  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Start
	I0717 19:32:53.338327  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring networks are active...
	I0717 19:32:53.338965  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring network default is active
	I0717 19:32:53.339348  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring network mk-default-k8s-diff-port-378944 is active
	I0717 19:32:53.339780  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Getting domain xml...
	I0717 19:32:53.340436  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Creating domain...
	I0717 19:32:54.632016  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting to get IP...
	I0717 19:32:54.632953  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.633425  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.633541  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:54.633409  460568 retry.go:31] will retry after 191.141019ms: waiting for machine to come up
	I0717 19:32:54.825767  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.826279  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.826311  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:54.826243  460568 retry.go:31] will retry after 334.738903ms: waiting for machine to come up
	I0717 19:32:55.162861  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.163361  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.163394  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:55.163319  460568 retry.go:31] will retry after 446.719082ms: waiting for machine to come up
	I0717 19:32:55.611971  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.612359  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.612388  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:55.612297  460568 retry.go:31] will retry after 387.196239ms: waiting for machine to come up
	I0717 19:32:56.000969  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.001385  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.001421  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:56.001323  460568 retry.go:31] will retry after 618.776991ms: waiting for machine to come up
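Note: the repeated retry.go:31 lines above are the KVM driver polling libvirt for the guest's DHCP lease, sleeping a growing, jittered interval between attempts until the machine reports an IP. A minimal Go sketch of that wait pattern follows; lookupIP is a hypothetical stand-in for the lease query, not minikube's actual API.

package vmwait

import (
    "fmt"
    "math/rand"
    "time"
)

// waitForIP polls lookupIP with a growing, jittered delay until the guest
// reports an address or the timeout passes. lookupIP is a hypothetical
// stand-in for the libvirt DHCP-lease query that the log lines above retry.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
    deadline := time.Now().Add(timeout)
    backoff := 200 * time.Millisecond
    for time.Now().Before(deadline) {
        if ip, err := lookupIP(); err == nil && ip != "" {
            return ip, nil
        }
        // Jitter keeps parallel waiters (several profiles start at once in this run)
        // from polling libvirt in lockstep.
        time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
        if backoff < 3*time.Second {
            backoff *= 2
        }
    }
    return "", fmt.Errorf("timed out waiting for machine to come up")
}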
	I0717 19:32:54.815249  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:54.818280  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:54.818662  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:54.818694  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:54.818925  459147 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 19:32:54.823292  459147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
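Note: the one-liner above pins host.minikube.internal in /etc/hosts to the host-side gateway (192.168.61.1): grep -v strips any existing entry, echo appends the fresh mapping, the result is written to a temp file named after the shell PID, and sudo cp installs it so the privileged write is a single copy. A simplified local Go equivalent, for illustration only (minikube runs the shell form over SSH):

package hosts

import (
    "fmt"
    "os"
    "strings"
)

// pinHostEntry rewrites hostsPath so that exactly one line maps name to ip,
// mirroring the grep -v / echo / sudo cp one-liner in the log above.
func pinHostEntry(hostsPath, ip, name string) error {
    data, err := os.ReadFile(hostsPath)
    if err != nil {
        return err
    }
    var kept []string
    for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
        if strings.HasSuffix(line, "\t"+name) {
            continue // drop any stale mapping for this hostname
        }
        kept = append(kept, line)
    }
    kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}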
	I0717 19:32:54.837168  459147 kubeadm.go:883] updating cluster {Name:no-preload-713715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:32:54.837345  459147 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:32:54.837394  459147 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:32:54.875819  459147 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 19:32:54.875859  459147 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:32:54.875946  459147 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:54.875964  459147 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:54.875987  459147 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:54.876016  459147 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:54.876030  459147 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 19:32:54.875991  459147 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:54.875971  459147 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:54.875949  459147 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:54.878011  459147 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:54.878029  459147 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:54.878033  459147 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:54.878047  459147 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:54.878078  459147 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:54.878020  459147 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:54.878020  459147 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:54.878021  459147 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 19:32:55.044905  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.065945  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.077752  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.100576  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 19:32:55.105038  459147 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 19:32:55.105122  459147 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.105181  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.109323  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.138522  459147 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 19:32:55.138582  459147 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.138652  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.166056  459147 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 19:32:55.166116  459147 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.166172  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.225986  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.255114  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.291108  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.291133  459147 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 19:32:55.291179  459147 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.291225  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.291238  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.291283  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.291287  459147 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 19:32:55.291355  459147 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.291382  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.317030  459147 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 19:32:55.317075  459147 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.317122  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.372223  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.372291  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.372329  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.378465  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.378498  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 19:32:55.378504  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 19:32:55.378584  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.378593  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:55.378589  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:32:55.443789  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 19:32:55.443799  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 19:32:55.443851  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.443902  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.443914  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:32:55.451377  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 19:32:55.451452  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 19:32:55.451487  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 19:32:55.451496  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:32:55.451535  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 19:32:55.451540  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:32:55.452022  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 19:32:55.848543  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:56.622250  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.622728  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.622756  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:56.622674  460568 retry.go:31] will retry after 591.25664ms: waiting for machine to come up
	I0717 19:32:57.215318  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:57.215728  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:57.215760  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:57.215674  460568 retry.go:31] will retry after 1.178875952s: waiting for machine to come up
	I0717 19:32:58.396341  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:58.396810  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:58.396840  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:58.396757  460568 retry.go:31] will retry after 1.444090511s: waiting for machine to come up
	I0717 19:32:59.842294  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:59.842722  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:59.842750  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:59.842683  460568 retry.go:31] will retry after 1.660894501s: waiting for machine to come up
	I0717 19:32:57.819031  459147 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.367504857s)
	I0717 19:32:57.819080  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 19:32:57.819112  459147 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0: (2.367550192s)
	I0717 19:32:57.819123  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 19:32:57.819196  459147 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.970607417s)
	I0717 19:32:57.819211  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.375270996s)
	I0717 19:32:57.819232  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 19:32:57.819254  459147 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 19:32:57.819260  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:57.819291  459147 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:57.819322  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:57.819335  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:57.823619  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:59.879412  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.060056699s)
	I0717 19:32:59.879448  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 19:32:59.879475  459147 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.055825616s)
	I0717 19:32:59.879539  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 19:32:59.879480  459147 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:32:59.879645  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:32:59.879762  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:33:01.862179  459147 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.982496804s)
	I0717 19:33:01.862232  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 19:33:01.862284  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.982489567s)
	I0717 19:33:01.862311  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 19:33:01.862352  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:33:01.862439  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:33:01.505553  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:01.505921  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:01.505949  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:01.505876  460568 retry.go:31] will retry after 1.937668711s: waiting for machine to come up
	I0717 19:33:03.445356  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:03.445903  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:03.445949  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:03.445839  460568 retry.go:31] will retry after 2.088910223s: waiting for machine to come up
	I0717 19:33:05.537212  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:05.537609  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:05.537640  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:05.537527  460568 retry.go:31] will retry after 2.960616491s: waiting for machine to come up
	I0717 19:33:03.827643  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.965173972s)
	I0717 19:33:03.827677  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 19:33:03.827712  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:33:03.827769  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:33:05.287464  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.459663322s)
	I0717 19:33:05.287509  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 19:33:05.287543  459147 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:33:05.287638  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:33:08.500028  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:08.500625  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:08.500667  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:08.500568  460568 retry.go:31] will retry after 3.494426589s: waiting for machine to come up
	I0717 19:33:08.560006  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.272339244s)
	I0717 19:33:08.560060  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 19:33:08.560099  459147 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:33:08.560169  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:33:09.202632  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 19:33:09.202684  459147 cache_images.go:123] Successfully loaded all cached images
	I0717 19:33:09.202692  459147 cache_images.go:92] duration metric: took 14.326812062s to LoadCachedImages
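Note: because no preload tarball exists for v1.31.0-beta.0, the LoadCachedImages block above pushes every required image through the same three steps: stat the tarball on the VM, transfer it only when missing ("copy: skipping ... (exists)" otherwise), then load it into CRI-O's image store with podman load. A rough Go sketch of that loop; the runner interface and copyToVM helper are assumptions standing in for minikube's ssh_runner and scp step, not its real types.

package images

import "fmt"

// runner abstracts the SSH command execution that appears as ssh_runner in the log.
type runner interface {
    Run(cmd string) error
}

// loadCachedImages mirrors the per-image sequence above: stat the tarball on
// the VM, copy it only if absent, then podman-load it into the runtime store.
func loadCachedImages(r runner, copyToVM func(local, remote string) error, images map[string]string) error {
    for local, remote := range images {
        // A successful stat means the tarball is already on the VM, so the copy is skipped.
        if err := r.Run(fmt.Sprintf("stat -c '%%s %%y' %s", remote)); err != nil {
            if err := copyToVM(local, remote); err != nil {
                return fmt.Errorf("copy %s: %w", local, err)
            }
        }
        if err := r.Run("sudo podman load -i " + remote); err != nil {
            return fmt.Errorf("load %s: %w", remote, err)
        }
    }
    return nil
}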
	I0717 19:33:09.202709  459147 kubeadm.go:934] updating node { 192.168.61.66 8443 v1.31.0-beta.0 crio true true} ...
	I0717 19:33:09.202917  459147 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-713715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:09.203024  459147 ssh_runner.go:195] Run: crio config
	I0717 19:33:09.250281  459147 cni.go:84] Creating CNI manager for ""
	I0717 19:33:09.250307  459147 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:09.250319  459147 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:09.250348  459147 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.66 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-713715 NodeName:no-preload-713715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.66"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.66 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:09.250507  459147 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.66
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-713715"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.66
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.66"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:09.250572  459147 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 19:33:09.260855  459147 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:09.260926  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:09.270148  459147 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0717 19:33:09.287113  459147 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 19:33:09.303147  459147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0717 19:33:09.319718  459147 ssh_runner.go:195] Run: grep 192.168.61.66	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:09.323343  459147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.66	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:09.335051  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:09.458012  459147 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:09.476517  459147 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715 for IP: 192.168.61.66
	I0717 19:33:09.476548  459147 certs.go:194] generating shared ca certs ...
	I0717 19:33:09.476581  459147 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:09.476822  459147 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:09.476888  459147 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:09.476901  459147 certs.go:256] generating profile certs ...
	I0717 19:33:09.477093  459147 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/client.key
	I0717 19:33:09.477157  459147 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.key.833d71c5
	I0717 19:33:09.477198  459147 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.key
	I0717 19:33:09.477346  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:09.477380  459147 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:09.477390  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:09.477415  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:09.477436  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:09.477460  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:09.477496  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:09.478210  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:09.523245  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:09.556326  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:09.592018  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:09.631190  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 19:33:09.663671  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:33:09.691062  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:09.715211  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:09.740818  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:09.766086  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:09.791739  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:09.817034  459147 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:09.835074  459147 ssh_runner.go:195] Run: openssl version
	I0717 19:33:09.841297  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:09.853525  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.857984  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.858052  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.864308  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:09.875577  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:09.886977  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.891840  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.891894  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.898044  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:09.910756  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:09.922945  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.927708  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.927771  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.933774  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:09.945891  459147 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:09.950743  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:09.956992  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:09.963228  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:09.969576  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:09.975912  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:09.982164  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
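Note: each openssl x509 -checkend 86400 call above asks whether the certificate remains valid for at least the next 86400 seconds (24 hours); a non-zero exit would force that cert to be regenerated before the cluster restart. An equivalent check in Go, purely illustrative:

package certs

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window, which is what `openssl x509 -checkend` tests.
func expiresWithin(path string, window time.Duration) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, fmt.Errorf("no PEM block in %s", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Now().Add(window).After(cert.NotAfter), nil
}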
	I0717 19:33:09.988308  459147 kubeadm.go:392] StartCluster: {Name:no-preload-713715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:09.988412  459147 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:09.988473  459147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:10.038048  459147 cri.go:89] found id: ""
	I0717 19:33:10.038123  459147 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:10.050153  459147 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:10.050179  459147 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:10.050244  459147 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:10.061413  459147 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:10.062384  459147 kubeconfig.go:125] found "no-preload-713715" server: "https://192.168.61.66:8443"
	I0717 19:33:10.064510  459147 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:10.075459  459147 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.66
	I0717 19:33:10.075494  459147 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:10.075507  459147 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:10.075551  459147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:10.115024  459147 cri.go:89] found id: ""
	I0717 19:33:10.115093  459147 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:10.135459  459147 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:10.147000  459147 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:10.147027  459147 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:10.147074  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:10.158197  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:10.158267  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:10.168726  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:10.178115  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:10.178169  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:10.187888  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:10.197501  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:10.197564  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:10.208958  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:10.219818  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:10.219889  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
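Note: the grep/rm pairs above inspect admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf for the expected https://control-plane.minikube.internal:8443 endpoint and delete any file that lacks it (here all four are simply absent), so the kubeadm phases below regenerate them. A hedged Go sketch of that cleanup; run stands in for the SSH command runner.

package kubeadm

import "fmt"

// cleanStaleKubeconfigs removes any kubeconfig under /etc/kubernetes that does
// not already reference the expected control-plane endpoint, matching the
// grep/rm sequence in the log above.
func cleanStaleKubeconfigs(run func(cmd string) error, endpoint string) {
    for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
        path := "/etc/kubernetes/" + f
        // grep exits non-zero when the endpoint (or the whole file) is missing,
        // so the stale file is deleted and later regenerated by kubeadm.
        if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
            _ = run("sudo rm -f " + path)
        }
    }
}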
	I0717 19:33:10.230847  459147 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:10.242115  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:10.352629  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.306147  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.508125  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.570418  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.632907  459147 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:11.633012  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:12.133086  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
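Note: once the kubeadm phases finish, api_server.go starts waiting for the kube-apiserver process; the two pgrep runs half a second apart are the first iterations of that poll loop. An illustrative Go version of the wait (it shells out locally here, whereas minikube issues the same command over SSH):

package apiserver

import (
    "fmt"
    "os/exec"
    "time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process appears or the
// timeout elapses, roughly the wait that begins in the log above.
func waitForAPIServer(timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
            return nil // process found
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("timed out waiting for kube-apiserver process to appear")
}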
	I0717 19:33:13.378581  459741 start.go:364] duration metric: took 4m1.766913597s to acquireMachinesLock for "old-k8s-version-998147"
	I0717 19:33:13.378661  459741 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:33:13.378670  459741 fix.go:54] fixHost starting: 
	I0717 19:33:13.379301  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:13.379346  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:13.399824  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0717 19:33:13.400269  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:13.400788  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:33:13.400811  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:13.401179  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:13.401339  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:13.401493  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetState
	I0717 19:33:13.403027  459741 fix.go:112] recreateIfNeeded on old-k8s-version-998147: state=Stopped err=<nil>
	I0717 19:33:13.403059  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	W0717 19:33:13.403205  459741 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:33:13.405244  459741 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-998147" ...
	I0717 19:33:11.996171  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.996646  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has current primary IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.996667  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Found IP for machine: 192.168.50.238
	I0717 19:33:11.996682  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Reserving static IP address...
	I0717 19:33:11.997157  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-378944", mac: "52:54:00:45:42:f3", ip: "192.168.50.238"} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:11.997197  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | skip adding static IP to network mk-default-k8s-diff-port-378944 - found existing host DHCP lease matching {name: "default-k8s-diff-port-378944", mac: "52:54:00:45:42:f3", ip: "192.168.50.238"}
	I0717 19:33:11.997213  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Reserved static IP address: 192.168.50.238
	I0717 19:33:11.997228  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for SSH to be available...
	I0717 19:33:11.997244  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Getting to WaitForSSH function...
	I0717 19:33:11.999193  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.999538  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:11.999564  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.999654  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Using SSH client type: external
	I0717 19:33:11.999689  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa (-rw-------)
	I0717 19:33:11.999718  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:11.999733  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | About to run SSH command:
	I0717 19:33:11.999751  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | exit 0
	I0717 19:33:12.124608  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | SSH cmd err, output: <nil>: 
	I0717 19:33:12.125041  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetConfigRaw
	I0717 19:33:12.125695  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:12.128263  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.128651  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.128683  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.128911  459447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/config.json ...
	I0717 19:33:12.129169  459447 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:12.129202  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:12.129412  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.131942  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.132259  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.132286  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.132464  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.132666  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.132847  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.133004  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.133213  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.133470  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.133484  459447 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:12.250371  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:12.250406  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.250672  459447 buildroot.go:166] provisioning hostname "default-k8s-diff-port-378944"
	I0717 19:33:12.250700  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.250891  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.253509  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.253895  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.253929  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.254116  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.254301  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.254467  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.254659  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.254809  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.255033  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.255048  459447 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-378944 && echo "default-k8s-diff-port-378944" | sudo tee /etc/hostname
	I0717 19:33:12.386839  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-378944
	
	I0717 19:33:12.386875  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.390265  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.390716  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.390758  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.390942  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.391165  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.391397  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.391593  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.391800  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.392028  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.392055  459447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-378944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-378944/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-378944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:12.510012  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:33:12.510080  459447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:12.510118  459447 buildroot.go:174] setting up certificates
	I0717 19:33:12.510139  459447 provision.go:84] configureAuth start
	I0717 19:33:12.510154  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.510469  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:12.513360  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.513713  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.513756  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.513840  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.516188  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.516606  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.516643  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.516778  459447 provision.go:143] copyHostCerts
	I0717 19:33:12.516867  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:12.516887  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:12.516946  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:12.517049  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:12.517060  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:12.517081  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:12.517133  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:12.517140  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:12.517157  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:12.517251  459447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-378944 san=[127.0.0.1 192.168.50.238 default-k8s-diff-port-378944 localhost minikube]
	I0717 19:33:12.664603  459447 provision.go:177] copyRemoteCerts
	I0717 19:33:12.664664  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:12.664692  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.667683  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.668071  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.668152  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.668276  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.668477  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.668665  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.668825  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:12.759500  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 19:33:12.789011  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:12.817876  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:12.847651  459447 provision.go:87] duration metric: took 337.491277ms to configureAuth
	I0717 19:33:12.847684  459447 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:12.847927  459447 config.go:182] Loaded profile config "default-k8s-diff-port-378944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:33:12.848029  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.851001  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.851460  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.851492  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.851670  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.851860  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.852050  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.852269  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.852466  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.852711  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.852736  459447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:13.135242  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:13.135272  459447 machine.go:97] duration metric: took 1.006081548s to provisionDockerMachine
	I0717 19:33:13.135286  459447 start.go:293] postStartSetup for "default-k8s-diff-port-378944" (driver="kvm2")
	I0717 19:33:13.135300  459447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:13.135331  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.135696  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:13.135731  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.138908  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.139252  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.139296  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.139577  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.139797  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.139996  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.140122  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.223998  459447 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:13.228297  459447 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:13.228327  459447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:13.228402  459447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:13.228508  459447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:13.228631  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:13.237923  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:13.262958  459447 start.go:296] duration metric: took 127.634911ms for postStartSetup
	I0717 19:33:13.263013  459447 fix.go:56] duration metric: took 19.949222697s for fixHost
	I0717 19:33:13.263040  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.265687  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.266102  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.266147  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.266274  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.266448  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.266658  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.266803  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.266974  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:13.267143  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:13.267154  459447 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:33:13.378375  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244793.352700977
	
	I0717 19:33:13.378410  459447 fix.go:216] guest clock: 1721244793.352700977
	I0717 19:33:13.378423  459447 fix.go:229] Guest: 2024-07-17 19:33:13.352700977 +0000 UTC Remote: 2024-07-17 19:33:13.263019102 +0000 UTC m=+276.814321502 (delta=89.681875ms)
	I0717 19:33:13.378449  459447 fix.go:200] guest clock delta is within tolerance: 89.681875ms
	I0717 19:33:13.378455  459447 start.go:83] releasing machines lock for "default-k8s-diff-port-378944", held for 20.064692595s
	I0717 19:33:13.378490  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.378818  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:13.382250  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.382663  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.382697  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.382819  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383336  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383515  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383640  459447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:13.383699  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.383782  459447 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:13.383808  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.386565  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.386802  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.386971  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.387022  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.387206  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.387255  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.387280  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.387377  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.387517  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.387595  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.387695  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.387769  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.387822  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.387963  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.491993  459447 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:13.498224  459447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:13.651601  459447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:13.659061  459447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:13.659131  459447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:13.679137  459447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:13.679172  459447 start.go:495] detecting cgroup driver to use...
	I0717 19:33:13.679244  459447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:13.700173  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:13.713284  459447 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:13.713345  459447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:13.727665  459447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:13.741270  459447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:13.850771  459447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:14.014484  459447 docker.go:233] disabling docker service ...
	I0717 19:33:14.014573  459447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:14.034049  459447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:14.051903  459447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:14.176188  459447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:14.339288  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:14.354934  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:14.376713  459447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:33:14.376781  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.387318  459447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:14.387395  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.401869  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.414206  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.426803  459447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:14.437992  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.448554  459447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.467390  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.478878  459447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:14.488552  459447 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:14.488623  459447 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:14.501075  459447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:33:14.511085  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:14.673591  459447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:33:14.812878  459447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:14.812974  459447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:14.818074  459447 start.go:563] Will wait 60s for crictl version
	I0717 19:33:14.818143  459447 ssh_runner.go:195] Run: which crictl
	I0717 19:33:14.822116  459447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:14.861763  459447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:14.861843  459447 ssh_runner.go:195] Run: crio --version
	I0717 19:33:14.891729  459447 ssh_runner.go:195] Run: crio --version
	I0717 19:33:14.925638  459447 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 19:33:14.927088  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:14.930542  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:14.931022  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:14.931068  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:14.931326  459447 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:14.936085  459447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:14.949590  459447 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-378944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:14.949747  459447 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:33:14.949875  459447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:14.991945  459447 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 19:33:14.992031  459447 ssh_runner.go:195] Run: which lz4
	I0717 19:33:14.996373  459447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:33:15.000840  459447 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:15.000875  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 19:33:13.406372  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .Start
	I0717 19:33:13.406519  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring networks are active...
	I0717 19:33:13.407255  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring network default is active
	I0717 19:33:13.407627  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring network mk-old-k8s-version-998147 is active
	I0717 19:33:13.408062  459741 main.go:141] libmachine: (old-k8s-version-998147) Getting domain xml...
	I0717 19:33:13.408909  459741 main.go:141] libmachine: (old-k8s-version-998147) Creating domain...
	I0717 19:33:14.690306  459741 main.go:141] libmachine: (old-k8s-version-998147) Waiting to get IP...
	I0717 19:33:14.691339  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:14.691802  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:14.691860  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:14.691788  460739 retry.go:31] will retry after 292.702678ms: waiting for machine to come up
	I0717 19:33:14.986450  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:14.986962  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:14.986987  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:14.986940  460739 retry.go:31] will retry after 251.722663ms: waiting for machine to come up
	I0717 19:33:15.240732  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:15.241343  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:15.241374  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:15.241290  460739 retry.go:31] will retry after 352.774498ms: waiting for machine to come up
	I0717 19:33:15.596176  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:15.596833  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:15.596859  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:15.596740  460739 retry.go:31] will retry after 570.542375ms: waiting for machine to come up
	I0717 19:33:16.168613  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:16.169103  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:16.169125  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:16.169061  460739 retry.go:31] will retry after 505.770507ms: waiting for machine to come up
	I0717 19:33:12.633596  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:12.674417  459147 api_server.go:72] duration metric: took 1.041511526s to wait for apiserver process to appear ...
	I0717 19:33:12.674443  459147 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:33:12.674473  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:12.674950  459147 api_server.go:269] stopped: https://192.168.61.66:8443/healthz: Get "https://192.168.61.66:8443/healthz": dial tcp 192.168.61.66:8443: connect: connection refused
	I0717 19:33:13.174575  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.167465  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.167503  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.167518  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.195663  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.195695  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.195712  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.203849  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.203880  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.674535  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.681650  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:16.681679  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:17.174938  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:17.186827  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:17.186890  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:17.674682  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:17.680814  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:17.680865  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:18.175463  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:18.181547  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:18.181576  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:18.675166  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:18.681507  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:18.681552  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:19.174630  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:19.183370  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:19.183416  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:19.674583  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:19.682432  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 200:
	ok
	I0717 19:33:19.691489  459147 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:33:19.691518  459147 api_server.go:131] duration metric: took 7.017066476s to wait for apiserver health ...
	I0717 19:33:19.691534  459147 cni.go:84] Creating CNI manager for ""
	I0717 19:33:19.691542  459147 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:19.693575  459147 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:33:16.494615  459447 crio.go:462] duration metric: took 1.498275118s to copy over tarball
	I0717 19:33:16.494697  459447 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:18.869018  459447 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.37428331s)
	I0717 19:33:18.869052  459447 crio.go:469] duration metric: took 2.374406548s to extract the tarball
	I0717 19:33:18.869063  459447 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:18.911073  459447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:18.952704  459447 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:33:18.952731  459447 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:33:18.952740  459447 kubeadm.go:934] updating node { 192.168.50.238 8444 v1.30.2 crio true true} ...
	I0717 19:33:18.952871  459447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-378944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:18.952961  459447 ssh_runner.go:195] Run: crio config
	I0717 19:33:19.004936  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:33:19.004962  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:19.004976  459447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:19.004997  459447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.238 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-378944 NodeName:default-k8s-diff-port-378944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:19.005127  459447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.238
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-378944"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:19.005190  459447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 19:33:19.018466  459447 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:19.018532  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:19.030706  459447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 19:33:19.050125  459447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:19.066411  459447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0717 19:33:19.083019  459447 ssh_runner.go:195] Run: grep 192.168.50.238	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:19.086956  459447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:19.098483  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:19.219538  459447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:19.240712  459447 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944 for IP: 192.168.50.238
	I0717 19:33:19.240760  459447 certs.go:194] generating shared ca certs ...
	I0717 19:33:19.240784  459447 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:19.240971  459447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:19.241029  459447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:19.241046  459447 certs.go:256] generating profile certs ...
	I0717 19:33:19.241147  459447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/client.key
	I0717 19:33:19.241232  459447 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.key.e4ed83d1
	I0717 19:33:19.241292  459447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.key
	I0717 19:33:19.241430  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:19.241472  459447 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:19.241488  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:19.241527  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:19.241563  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:19.241599  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:19.241670  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:19.242447  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:19.274950  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:19.305226  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:19.348027  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:19.384636  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 19:33:19.415615  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:33:19.443553  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:19.477731  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:19.509828  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:19.536409  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:19.562482  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:19.586980  459447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:19.603021  459447 ssh_runner.go:195] Run: openssl version
	I0717 19:33:19.608707  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:19.619272  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.624082  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.624144  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.630085  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:19.640930  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:19.651717  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.656207  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.656265  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.662211  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:19.672893  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:19.686880  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.691831  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.691883  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.699526  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:19.712458  459447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:19.717815  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:19.726172  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:19.732924  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:19.739322  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:19.749452  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:19.756136  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 19:33:19.763812  459447 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-378944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:19.763936  459447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:19.763998  459447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:19.807197  459447 cri.go:89] found id: ""
	I0717 19:33:19.807303  459447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:19.819547  459447 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:19.819577  459447 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:19.819652  459447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:19.832162  459447 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:19.833260  459447 kubeconfig.go:125] found "default-k8s-diff-port-378944" server: "https://192.168.50.238:8444"
	I0717 19:33:19.835685  459447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:19.849027  459447 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.238
	I0717 19:33:19.849077  459447 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:19.849094  459447 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:19.849182  459447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:19.893260  459447 cri.go:89] found id: ""
	I0717 19:33:19.893337  459447 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:19.910254  459447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:19.920017  459447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:19.920039  459447 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:19.920093  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 19:33:19.929144  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:19.929212  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:19.938461  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 19:33:19.947172  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:19.947242  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:19.956774  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 19:33:19.965778  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:19.965832  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:19.975529  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 19:33:19.984977  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:19.985037  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:19.994548  459447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:20.003758  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:20.326183  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.077120  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.274281  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.372150  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.472510  459447 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:21.472619  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:16.676221  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:16.676783  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:16.676810  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:16.676699  460739 retry.go:31] will retry after 789.027841ms: waiting for machine to come up
	I0717 19:33:17.467899  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:17.468360  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:17.468388  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:17.468307  460739 retry.go:31] will retry after 851.039047ms: waiting for machine to come up
	I0717 19:33:18.321307  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:18.321848  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:18.321877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:18.321790  460739 retry.go:31] will retry after 1.177722997s: waiting for machine to come up
	I0717 19:33:19.501191  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:19.501846  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:19.501877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:19.501754  460739 retry.go:31] will retry after 1.20353732s: waiting for machine to come up
	I0717 19:33:20.707223  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:20.707681  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:20.707715  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:20.707620  460739 retry.go:31] will retry after 2.05955161s: waiting for machine to come up
	I0717 19:33:19.694884  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:33:19.710519  459147 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:33:19.732437  459147 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:33:19.743619  459147 system_pods.go:59] 8 kube-system pods found
	I0717 19:33:19.743647  459147 system_pods.go:61] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:33:19.743657  459147 system_pods.go:61] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:33:19.743667  459147 system_pods.go:61] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:33:19.743681  459147 system_pods.go:61] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:33:19.743688  459147 system_pods.go:61] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:33:19.743698  459147 system_pods.go:61] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:33:19.743711  459147 system_pods.go:61] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:33:19.743718  459147 system_pods.go:61] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:33:19.743725  459147 system_pods.go:74] duration metric: took 11.261865ms to wait for pod list to return data ...
	I0717 19:33:19.743742  459147 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:33:19.749108  459147 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:33:19.749135  459147 node_conditions.go:123] node cpu capacity is 2
	I0717 19:33:19.749163  459147 node_conditions.go:105] duration metric: took 5.414531ms to run NodePressure ...
	I0717 19:33:19.749183  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:22.151017  459147 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.401804862s)
	I0717 19:33:22.151065  459147 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:33:22.158240  459147 kubeadm.go:739] kubelet initialised
	I0717 19:33:22.158277  459147 kubeadm.go:740] duration metric: took 7.198956ms waiting for restarted kubelet to initialise ...
	I0717 19:33:22.158298  459147 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:22.164783  459147 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.174103  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.174465  459147 pod_ready.go:81] duration metric: took 9.568158ms for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.174513  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.174544  459147 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.184692  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "etcd-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.184804  459147 pod_ready.go:81] duration metric: took 10.23708ms for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.184862  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "etcd-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.184891  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.193029  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-apiserver-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.193143  459147 pod_ready.go:81] duration metric: took 8.227095ms for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.193175  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-apiserver-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.193234  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.200916  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.201017  459147 pod_ready.go:81] duration metric: took 7.740745ms for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.201047  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.201081  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.555554  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-proxy-x85f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.555590  459147 pod_ready.go:81] duration metric: took 354.475367ms for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.555603  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-proxy-x85f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.555612  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.977850  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-scheduler-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.977889  459147 pod_ready.go:81] duration metric: took 422.268041ms for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.977904  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-scheduler-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.977913  459147 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:23.355727  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:23.355765  459147 pod_ready.go:81] duration metric: took 377.839773ms for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:23.355778  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:23.355787  459147 pod_ready.go:38] duration metric: took 1.197476636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:23.355807  459147 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:33:23.369763  459147 ops.go:34] apiserver oom_adj: -16
	I0717 19:33:23.369789  459147 kubeadm.go:597] duration metric: took 13.319602224s to restartPrimaryControlPlane
	I0717 19:33:23.369801  459147 kubeadm.go:394] duration metric: took 13.381501456s to StartCluster
	I0717 19:33:23.369825  459147 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:23.369925  459147 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:33:23.371364  459147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:23.371643  459147 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:33:23.371763  459147 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:33:23.371851  459147 addons.go:69] Setting storage-provisioner=true in profile "no-preload-713715"
	I0717 19:33:23.371902  459147 addons.go:234] Setting addon storage-provisioner=true in "no-preload-713715"
	I0717 19:33:23.371905  459147 config.go:182] Loaded profile config "no-preload-713715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0717 19:33:23.371915  459147 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:33:23.371904  459147 addons.go:69] Setting default-storageclass=true in profile "no-preload-713715"
	I0717 19:33:23.371921  459147 addons.go:69] Setting metrics-server=true in profile "no-preload-713715"
	I0717 19:33:23.371949  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.371963  459147 addons.go:234] Setting addon metrics-server=true in "no-preload-713715"
	I0717 19:33:23.371962  459147 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-713715"
	W0717 19:33:23.371973  459147 addons.go:243] addon metrics-server should already be in state true
	I0717 19:33:23.372010  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.372248  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372283  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.372354  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372363  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372380  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.372466  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.373392  459147 out.go:177] * Verifying Kubernetes components...
	I0717 19:33:23.374639  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:23.391842  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45469
	I0717 19:33:23.391844  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I0717 19:33:23.392376  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.392449  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.392909  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.392934  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.393266  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.393283  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.393316  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.393673  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.394050  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.394066  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.394279  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.394317  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.413449  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0717 19:33:23.413977  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.414416  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.414429  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.414535  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35317
	I0717 19:33:23.414847  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.415050  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.415439  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33637
	I0717 19:33:23.415603  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.416098  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.416416  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.416442  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.416782  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.416860  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.417110  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.417129  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.417169  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.417631  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.417898  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.419162  459147 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:33:23.419540  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.420437  459147 addons.go:234] Setting addon default-storageclass=true in "no-preload-713715"
	W0717 19:33:23.420461  459147 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:33:23.420531  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.420670  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:33:23.420690  459147 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:33:23.420710  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.420935  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.420987  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.421482  459147 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:23.422876  459147 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:33:23.422895  459147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:33:23.422914  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.424665  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.425387  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.425596  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.425648  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.425860  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.426032  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.426224  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.426508  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.426884  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.426912  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.427019  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.427204  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.427375  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.427536  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.440935  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0717 19:33:23.441405  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.442015  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.442036  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.442449  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.443045  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.443086  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.462722  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0717 19:33:23.463099  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.463642  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.463666  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.464015  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.464302  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.465945  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.466153  459147 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:33:23.466168  459147 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:33:23.466187  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.469235  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.469665  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.469690  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.469961  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.470125  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.470263  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.470380  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.604321  459147 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:23.631723  459147 node_ready.go:35] waiting up to 6m0s for node "no-preload-713715" to be "Ready" ...
	I0717 19:33:23.691508  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:33:23.691839  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:33:23.870407  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:33:23.870440  459147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:33:23.962828  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:33:23.962862  459147 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:33:24.048413  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:33:24.048458  459147 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:33:24.180828  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
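The pair of steps above (scp a manifest into /etc/kubernetes/addons, then kubectl apply it with the in-VM kubeconfig) is the basic addon-install pattern in this log. A hedged sketch of the apply half over a plain SSH session, assuming an *ssh.Client like the one in the previous sketch and the binary path shown in the log:

    package sketch

    import (
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    // applyAddon mirrors the ssh_runner step above: run kubectl apply inside
    // the VM against a manifest already copied to /etc/kubernetes/addons.
    func applyAddon(client *ssh.Client, manifest string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
    		"/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f " + manifest
    	if out, err := sess.CombinedOutput(cmd); err != nil {
    		return fmt.Errorf("apply %s failed: %w\n%s", manifest, err, out)
    	}
    	return nil
    }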
	I0717 19:33:25.337869  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.645994421s)
	I0717 19:33:25.337928  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.337939  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.338245  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.338260  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.338267  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.338279  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.340140  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.340158  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.340163  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.341608  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.650024823s)
	I0717 19:33:25.341659  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.341673  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.341991  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.342008  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.342052  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.342072  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.342087  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.343152  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.343174  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.374730  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.374764  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.375093  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.375192  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.375214  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.648979  459147 node_ready.go:53] node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:25.756694  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.575723552s)
	I0717 19:33:25.756793  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.756809  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.757133  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.757197  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.757210  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.757222  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.757231  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.757463  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.757496  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.757508  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.757518  459147 addons.go:475] Verifying addon metrics-server=true in "no-preload-713715"
	I0717 19:33:25.760056  459147 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:33:21.973023  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:22.473773  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:22.494696  459447 api_server.go:72] duration metric: took 1.022184833s to wait for apiserver process to appear ...
	I0717 19:33:22.494730  459447 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:33:22.494756  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:22.495278  459447 api_server.go:269] stopped: https://192.168.50.238:8444/healthz: Get "https://192.168.50.238:8444/healthz": dial tcp 192.168.50.238:8444: connect: connection refused
	I0717 19:33:22.994814  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.523793  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:25.523836  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:25.523861  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.572664  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:25.572703  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:25.994910  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.999901  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:25.999941  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:22.769700  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:22.770437  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:22.770462  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:22.770379  460739 retry.go:31] will retry after 2.380645077s: waiting for machine to come up
	I0717 19:33:25.152531  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:25.153124  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:25.153154  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:25.152995  460739 retry.go:31] will retry after 2.594173577s: waiting for machine to come up
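The retry.go lines above poll for the VM's DHCP lease with growing delays ("will retry after 2.380645077s", then 2.59s, and so on). A minimal sketch of that style of wait loop (illustrative; not minikube's retry package, and the growth factor and jitter here are assumptions):

    package sketch

    import (
    	"errors"
    	"math/rand"
    	"time"
    )

    // waitForIP keeps calling lookup until it returns an address, sleeping a
    // little longer (with jitter) after each failed attempt, like the
    // "waiting for machine to come up" retries above.
    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
    	delay := time.Second
    	for i := 0; i < attempts; i++ {
    		if ip, err := lookup(); err == nil {
    			return ip, nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		time.Sleep(delay + jitter)
    		delay = delay * 3 / 2 // grow the wait between attempts
    	}
    	return "", errors.New("machine did not get an IP in time")
    }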
	I0717 19:33:25.761158  459147 addons.go:510] duration metric: took 2.389396179s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:33:26.636593  459147 node_ready.go:49] node "no-preload-713715" has status "Ready":"True"
	I0717 19:33:26.636631  459147 node_ready.go:38] duration metric: took 3.004871258s for node "no-preload-713715" to be "Ready" ...
	I0717 19:33:26.636647  459147 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:26.645025  459147 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.657588  459147 pod_ready.go:92] pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:26.657621  459147 pod_ready.go:81] duration metric: took 12.564266ms for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.657643  459147 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.495865  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:26.501901  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:26.501948  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:26.995379  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:27.007246  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:27.007293  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:27.495657  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:27.500340  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:27.500376  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:27.995477  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.001272  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:28.001311  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:28.495106  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.499745  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:28.499785  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:28.994956  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.999368  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 200:
	ok
	I0717 19:33:29.005912  459447 api_server.go:141] control plane version: v1.30.2
	I0717 19:33:29.005941  459447 api_server.go:131] duration metric: took 6.511204058s to wait for apiserver health ...
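The healthz wait that just completed is a simple poll: GET https://192.168.50.238:8444/healthz, treat the 403 and 500 answers as "not ready yet", and stop at 200 ok. A hedged sketch of that loop (the TLS-skipping client is a simplification for the sketch, not what api_server.go does):

    package sketch

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver's /healthz endpoint, as in the
    // "Checking apiserver healthz at ..." lines, until it answers 200 "ok"
    // or the deadline passes.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			io.Copy(io.Discard, resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz answered 200: ok
    			}
    			// 403 (RBAC not yet bootstrapped) and 500 (post-start hooks still
    			// failing) both mean "keep polling", as in the log above.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s never became healthy", url)
    }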
	I0717 19:33:29.005952  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:33:29.005958  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:29.007962  459447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:33:29.009467  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:33:29.020044  459447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:33:29.039591  459447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:33:29.049534  459447 system_pods.go:59] 8 kube-system pods found
	I0717 19:33:29.049575  459447 system_pods.go:61] "coredns-7db6d8ff4d-zrllj" [a343d67b-7bfe-4433-a6a0-dd129f622484] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:33:29.049585  459447 system_pods.go:61] "etcd-default-k8s-diff-port-378944" [8b73f940-3131-4c49-88a8-909e448a17fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:33:29.049592  459447 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-378944" [4368acf5-fcf0-4bb1-8518-dc883a3ad94a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:33:29.049600  459447 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-378944" [a9dce074-19b1-4375-bb51-2fa3a7e628a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:33:29.049605  459447 system_pods.go:61] "kube-proxy-qq6gq" [7cd51f2c-1d5d-4376-8685-a4912f158995] Running
	I0717 19:33:29.049609  459447 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-378944" [2889aa80-5d65-485f-b4ef-396e76a40a80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:33:29.049617  459447 system_pods.go:61] "metrics-server-569cc877fc-7rl9d" [217e917f-6179-4b21-baed-7293ef9f6fc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:33:29.049621  459447 system_pods.go:61] "storage-provisioner" [fc434634-e675-4df7-8df2-330e3f2cf36b] Running
	I0717 19:33:29.049628  459447 system_pods.go:74] duration metric: took 10.013687ms to wait for pod list to return data ...
	I0717 19:33:29.049640  459447 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:33:29.053279  459447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:33:29.053306  459447 node_conditions.go:123] node cpu capacity is 2
	I0717 19:33:29.053318  459447 node_conditions.go:105] duration metric: took 3.672966ms to run NodePressure ...
	I0717 19:33:29.053336  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:29.329460  459447 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:33:29.335545  459447 kubeadm.go:739] kubelet initialised
	I0717 19:33:29.335570  459447 kubeadm.go:740] duration metric: took 6.082515ms waiting for restarted kubelet to initialise ...
	I0717 19:33:29.335587  459447 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:29.343632  459447 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.348772  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.348798  459447 pod_ready.go:81] duration metric: took 5.144899ms for pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.348810  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.348820  459447 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.354355  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.354386  459447 pod_ready.go:81] duration metric: took 5.550767ms for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.354398  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.354410  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.359416  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.359433  459447 pod_ready.go:81] duration metric: took 5.007721ms for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.359442  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.359448  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:31.369477  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
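The pod_ready lines above keep re-reading each system pod and looking at its Ready condition, skipping the wait while the hosting node itself reports Ready=False. A minimal client-go sketch of the per-pod check (illustrative only, not minikube's pod_ready.go):

    package sketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podIsReady fetches a pod and reports whether its Ready condition is True,
    // which is what the `has status "Ready":"True"` lines are checking.
    func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }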
	I0717 19:33:27.748311  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:27.748683  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:27.748710  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:27.748647  460739 retry.go:31] will retry after 3.034683519s: waiting for machine to come up
	I0717 19:33:30.784524  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.784995  459741 main.go:141] libmachine: (old-k8s-version-998147) Found IP for machine: 192.168.72.208
	I0717 19:33:30.785018  459741 main.go:141] libmachine: (old-k8s-version-998147) Reserving static IP address...
	I0717 19:33:30.785042  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has current primary IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.785437  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "old-k8s-version-998147", mac: "52:54:00:e7:d4:91", ip: "192.168.72.208"} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.785462  459741 main.go:141] libmachine: (old-k8s-version-998147) Reserved static IP address: 192.168.72.208
	I0717 19:33:30.785478  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | skip adding static IP to network mk-old-k8s-version-998147 - found existing host DHCP lease matching {name: "old-k8s-version-998147", mac: "52:54:00:e7:d4:91", ip: "192.168.72.208"}
	I0717 19:33:30.785490  459741 main.go:141] libmachine: (old-k8s-version-998147) Waiting for SSH to be available...
	I0717 19:33:30.785502  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Getting to WaitForSSH function...
	I0717 19:33:30.787861  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.788286  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.788339  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.788506  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using SSH client type: external
	I0717 19:33:30.788535  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa (-rw-------)
	I0717 19:33:30.788575  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:30.788592  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | About to run SSH command:
	I0717 19:33:30.788605  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | exit 0
	I0717 19:33:30.916827  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | SSH cmd err, output: <nil>: 
	I0717 19:33:30.917232  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetConfigRaw
	I0717 19:33:30.917949  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:30.920672  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.921033  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.921069  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.921321  459741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json ...
	I0717 19:33:30.921518  459741 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:30.921538  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:30.921777  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:30.923995  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.924337  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.924364  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.924515  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:30.924708  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:30.924894  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:30.925021  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:30.925229  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:30.925417  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:30.925428  459741 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:31.037218  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:31.037249  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.037537  459741 buildroot.go:166] provisioning hostname "old-k8s-version-998147"
	I0717 19:33:31.037569  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.037782  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.040877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.041209  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.041252  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.041382  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.041577  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.041764  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.041940  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.042121  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.042313  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.042329  459741 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-998147 && echo "old-k8s-version-998147" | sudo tee /etc/hostname
	I0717 19:33:31.169368  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-998147
	
	I0717 19:33:31.169401  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.172170  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.172475  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.172520  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.172739  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.172950  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.173133  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.173321  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.173557  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.173809  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.173828  459741 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-998147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-998147/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-998147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:31.293920  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:33:31.293957  459741 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:31.293997  459741 buildroot.go:174] setting up certificates
	I0717 19:33:31.294010  459741 provision.go:84] configureAuth start
	I0717 19:33:31.294022  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.294383  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:31.297356  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.297766  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.297800  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.297961  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.300159  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.300454  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.300507  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.300638  459741 provision.go:143] copyHostCerts
	I0717 19:33:31.300707  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:31.300721  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:31.300787  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:31.300917  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:31.300929  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:31.300962  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:31.301038  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:31.301046  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:31.301066  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:31.301112  459741 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-998147 san=[127.0.0.1 192.168.72.208 localhost minikube old-k8s-version-998147]
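provision.go above generates a server certificate signed by the machine CA with a SAN list of [127.0.0.1 192.168.72.208 localhost minikube old-k8s-version-998147]. A small crypto/x509 sketch of building and signing such a certificate (an illustration of the standard-library pattern, not minikube's provisioning code; field choices like the validity window are assumptions):

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // serverCertTemplate builds a certificate template with the same kind of
    // SAN list the provision.go line above shows (IPs plus host names).
    func serverCertTemplate(org string, ips []net.IP, dnsNames []string) *x509.Certificate {
    	return &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{org}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().AddDate(10, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    		DNSNames:     dnsNames,
    	}
    }

    // signServerCert generates a fresh server key and signs the template with
    // the CA certificate and key, returning the DER-encoded certificate.
    func signServerCert(tmpl, ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }

A template built with org "jenkins.old-k8s-version-998147", IPs 127.0.0.1 and 192.168.72.208, and DNS names localhost, minikube, old-k8s-version-998147 would carry the same SANs as the log line above.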
	I0717 19:33:32.217560  459061 start.go:364] duration metric: took 53.370503448s to acquireMachinesLock for "embed-certs-637675"
	I0717 19:33:32.217640  459061 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:33:32.217653  459061 fix.go:54] fixHost starting: 
	I0717 19:33:32.218221  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:32.218273  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:32.236152  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0717 19:33:32.236693  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:32.237234  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:33:32.237261  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:32.237630  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:32.237827  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:32.237981  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:33:32.239582  459061 fix.go:112] recreateIfNeeded on embed-certs-637675: state=Stopped err=<nil>
	I0717 19:33:32.239630  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	W0717 19:33:32.239777  459061 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:33:32.241662  459061 out.go:177] * Restarting existing kvm2 VM for "embed-certs-637675" ...
	I0717 19:33:28.164383  459147 pod_ready.go:92] pod "etcd-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.164416  459147 pod_ready.go:81] duration metric: took 1.506759615s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.164430  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.169329  459147 pod_ready.go:92] pod "kube-apiserver-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.169359  459147 pod_ready.go:81] duration metric: took 4.920897ms for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.169374  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.174231  459147 pod_ready.go:92] pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.174256  459147 pod_ready.go:81] duration metric: took 4.874197ms for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.174270  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:30.181752  459147 pod_ready.go:102] pod "kube-proxy-x85f5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:32.181095  459147 pod_ready.go:92] pod "kube-proxy-x85f5" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:32.181128  459147 pod_ready.go:81] duration metric: took 4.006849577s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.181146  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.186196  459147 pod_ready.go:92] pod "kube-scheduler-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:32.186226  459147 pod_ready.go:81] duration metric: took 5.071066ms for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.186240  459147 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:31.522479  459741 provision.go:177] copyRemoteCerts
	I0717 19:33:31.522546  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:31.522602  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.525768  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.526171  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.526203  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.526344  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.526551  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.526724  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.526904  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:31.612117  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 19:33:31.638832  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:31.664757  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:31.689941  459741 provision.go:87] duration metric: took 395.916596ms to configureAuth
	I0717 19:33:31.689975  459741 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:31.690190  459741 config.go:182] Loaded profile config "old-k8s-version-998147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:33:31.690265  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.692837  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.693207  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.693234  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.693449  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.693671  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.693826  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.694059  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.694245  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.694413  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.694429  459741 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:31.974825  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:31.974852  459741 machine.go:97] duration metric: took 1.053320969s to provisionDockerMachine
	I0717 19:33:31.974865  459741 start.go:293] postStartSetup for "old-k8s-version-998147" (driver="kvm2")
	I0717 19:33:31.974875  459741 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:31.974896  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:31.975219  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:31.975248  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.978388  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.978767  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.978799  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.979026  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.979228  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.979423  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.979548  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.063516  459741 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:32.067826  459741 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:32.067854  459741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:32.067935  459741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:32.068032  459741 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:32.068178  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:32.077672  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:32.102750  459741 start.go:296] duration metric: took 127.86801ms for postStartSetup
	I0717 19:33:32.102793  459741 fix.go:56] duration metric: took 18.724124854s for fixHost
	I0717 19:33:32.102816  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.105928  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.106324  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.106349  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.106498  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.106750  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.106912  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.107091  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.107267  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:32.107435  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:32.107447  459741 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:33:32.217378  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244812.173823160
	
	I0717 19:33:32.217412  459741 fix.go:216] guest clock: 1721244812.173823160
	I0717 19:33:32.217424  459741 fix.go:229] Guest: 2024-07-17 19:33:32.17382316 +0000 UTC Remote: 2024-07-17 19:33:32.102798084 +0000 UTC m=+260.639424711 (delta=71.025076ms)
	I0717 19:33:32.217462  459741 fix.go:200] guest clock delta is within tolerance: 71.025076ms
	I0717 19:33:32.217476  459741 start.go:83] releasing machines lock for "old-k8s-version-998147", held for 18.838841423s
	I0717 19:33:32.217515  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.217908  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:32.221349  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.221669  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.221701  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.221823  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222444  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222647  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222744  459741 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:32.222799  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.222935  459741 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:32.222963  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.225811  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.225842  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226180  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.226207  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226235  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.226252  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226347  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.226651  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.226654  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.226818  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.226911  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.226963  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.227238  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.227243  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.331645  459741 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:32.338968  459741 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:32.491164  459741 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:32.498407  459741 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:32.498472  459741 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:32.515829  459741 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:32.515858  459741 start.go:495] detecting cgroup driver to use...
	I0717 19:33:32.515926  459741 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:32.534094  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:32.549874  459741 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:32.549938  459741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:32.565389  459741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:32.580187  459741 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:32.709855  459741 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:32.889734  459741 docker.go:233] disabling docker service ...
	I0717 19:33:32.889804  459741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:32.909179  459741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:32.923944  459741 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:33.043740  459741 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:33.174272  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:33.189545  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:33.210166  459741 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 19:33:33.210238  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.222478  459741 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:33.222547  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.234479  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.247161  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.258702  459741 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:33.271516  459741 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:33.282032  459741 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:33.282087  459741 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:33.296554  459741 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:33:33.307378  459741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:33.447447  459741 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:33:33.606295  459741 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:33.606388  459741 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:33.611193  459741 start.go:563] Will wait 60s for crictl version
	I0717 19:33:33.611252  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:33.615370  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:33.660721  459741 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:33.660803  459741 ssh_runner.go:195] Run: crio --version
	I0717 19:33:33.695406  459741 ssh_runner.go:195] Run: crio --version
	I0717 19:33:33.727703  459741 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 19:33:32.243015  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Start
	I0717 19:33:32.243191  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring networks are active...
	I0717 19:33:32.244008  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring network default is active
	I0717 19:33:32.244302  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring network mk-embed-certs-637675 is active
	I0717 19:33:32.244826  459061 main.go:141] libmachine: (embed-certs-637675) Getting domain xml...
	I0717 19:33:32.245560  459061 main.go:141] libmachine: (embed-certs-637675) Creating domain...
	I0717 19:33:33.537081  459061 main.go:141] libmachine: (embed-certs-637675) Waiting to get IP...
	I0717 19:33:33.538117  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:33.538562  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:33.538630  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:33.538531  460929 retry.go:31] will retry after 245.180235ms: waiting for machine to come up
	I0717 19:33:33.784957  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:33.785535  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:33.785567  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:33.785490  460929 retry.go:31] will retry after 353.289988ms: waiting for machine to come up
	I0717 19:33:34.141088  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.141697  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.141721  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.141637  460929 retry.go:31] will retry after 404.344963ms: waiting for machine to come up
	I0717 19:33:34.547331  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.547928  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.547956  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.547822  460929 retry.go:31] will retry after 382.194721ms: waiting for machine to come up
	I0717 19:33:34.931269  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.931746  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.931776  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.931653  460929 retry.go:31] will retry after 485.884671ms: waiting for machine to come up
	I0717 19:33:35.419418  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:35.419957  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:35.419991  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:35.419896  460929 retry.go:31] will retry after 598.409396ms: waiting for machine to come up
	I0717 19:33:36.019507  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:36.020091  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:36.020118  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:36.020041  460929 retry.go:31] will retry after 815.010839ms: waiting for machine to come up
	I0717 19:33:33.866250  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:35.869264  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:33.729003  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:33.732254  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:33.732730  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:33.732761  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:33.732992  459741 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:33.737578  459741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:33.751952  459741 kubeadm.go:883] updating cluster {Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:33.752069  459741 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:33:33.752141  459741 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:33.799085  459741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:33:33.799167  459741 ssh_runner.go:195] Run: which lz4
	I0717 19:33:33.803899  459741 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:33:33.808398  459741 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:33.808431  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 19:33:35.539736  459741 crio.go:462] duration metric: took 1.735871318s to copy over tarball
	I0717 19:33:35.539833  459741 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:34.210207  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:36.693543  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:36.837115  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:36.837531  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:36.837560  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:36.837482  460929 retry.go:31] will retry after 1.072167201s: waiting for machine to come up
	I0717 19:33:37.911591  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:37.912149  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:37.912173  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:37.912104  460929 retry.go:31] will retry after 1.782290473s: waiting for machine to come up
	I0717 19:33:39.696512  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:39.696980  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:39.697015  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:39.696923  460929 retry.go:31] will retry after 1.896567581s: waiting for machine to come up
	I0717 19:33:36.872836  459447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.872865  459447 pod_ready.go:81] duration metric: took 7.513409896s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.872876  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qq6gq" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.878642  459447 pod_ready.go:92] pod "kube-proxy-qq6gq" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.878665  459447 pod_ready.go:81] duration metric: took 5.782297ms for pod "kube-proxy-qq6gq" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.878673  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.887916  459447 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.887943  459447 pod_ready.go:81] duration metric: took 9.259629ms for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.887957  459447 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:39.411899  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:38.677338  459741 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.137463162s)
	I0717 19:33:38.677381  459741 crio.go:469] duration metric: took 3.137607875s to extract the tarball
	I0717 19:33:38.677396  459741 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:38.721981  459741 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:38.756640  459741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:33:38.756670  459741 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:33:38.756755  459741 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:38.756840  459741 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:38.756885  459741 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:38.756923  459741 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.756887  459741 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 19:33:38.756866  459741 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:38.756875  459741 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:38.757061  459741 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:38.758622  459741 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:38.758705  459741 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 19:33:38.758860  459741 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:38.758902  459741 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.758945  459741 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:38.758977  459741 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:38.759058  459741 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:38.759126  459741 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:38.947033  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 19:33:38.978340  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.989519  459741 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 19:33:38.989583  459741 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 19:33:38.989631  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.007170  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.034177  459741 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 19:33:39.034232  459741 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:39.034282  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.034287  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 19:33:39.062389  459741 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 19:33:39.062443  459741 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.062490  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.080521  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:39.080640  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 19:33:39.080739  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.101886  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.114010  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.122572  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.131514  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 19:33:39.145327  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 19:33:39.187564  459741 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 19:33:39.187685  459741 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.187756  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.192838  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.232745  459741 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 19:33:39.232807  459741 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.232822  459741 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 19:33:39.232864  459741 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.232897  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.232918  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.232867  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.249586  459741 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 19:33:39.249634  459741 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.249677  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.280522  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 19:33:39.280616  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.280622  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.280736  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.354545  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 19:33:39.354577  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 19:33:39.354740  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 19:33:39.640493  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:39.792919  459741 cache_images.go:92] duration metric: took 1.03622454s to LoadCachedImages
	W0717 19:33:39.793071  459741 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
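The X warning above means none of the v1.20.0 images were found in the local cache under .minikube/cache/images, so they will be pulled from the registry while the cluster is brought up instead. Outside of CI, one way to avoid this would be to pre-seed the images — a hypothetical example, not something this run does:
  $ minikube cache add registry.k8s.io/pause:3.2          # stage the image in minikube's local cache
  $ sudo crictl pull registry.k8s.io/pause:3.2            # or pull it straight into CRI-O on the guest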
	I0717 19:33:39.793093  459741 kubeadm.go:934] updating node { 192.168.72.208 8443 v1.20.0 crio true true} ...
	I0717 19:33:39.793266  459741 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-998147 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
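The [Unit]/[Service] fragment above is the kubelet drop-in that is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. On the guest, the merged unit (base service plus drop-in) can be inspected with, for example:
  $ systemctl cat kubelet    # shows kubelet.service together with the 10-kubeadm.conf drop-in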
	I0717 19:33:39.793390  459741 ssh_runner.go:195] Run: crio config
	I0717 19:33:39.854291  459741 cni.go:84] Creating CNI manager for ""
	I0717 19:33:39.854320  459741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:39.854333  459741 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:39.854355  459741 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-998147 NodeName:old-k8s-version-998147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 19:33:39.854569  459741 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-998147"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
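The YAML above is the generated kubeadm config that is written to /var/tmp/minikube/kubeadm.yaml.new just below. As a side note, a config like this can be sanity-checked out-of-band with kubeadm's dry-run mode — a hypothetical invocation, not part of this test run:
  $ sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run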
	I0717 19:33:39.854672  459741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 19:33:39.865802  459741 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:39.865892  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:39.878728  459741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 19:33:39.899402  459741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:39.917946  459741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 19:33:39.937916  459741 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:39.942211  459741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:39.957083  459741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:40.077407  459741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:40.096211  459741 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147 for IP: 192.168.72.208
	I0717 19:33:40.096244  459741 certs.go:194] generating shared ca certs ...
	I0717 19:33:40.096269  459741 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:40.096511  459741 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:40.096578  459741 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:40.096592  459741 certs.go:256] generating profile certs ...
	I0717 19:33:40.096727  459741 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/client.key
	I0717 19:33:40.096794  459741 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key.204e9011
	I0717 19:33:40.096852  459741 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key
	I0717 19:33:40.097009  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:40.097049  459741 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:40.097062  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:40.097095  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:40.097133  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:40.097161  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:40.097215  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:40.097920  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:40.144174  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:40.182700  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:40.222340  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:40.259248  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 19:33:40.302619  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:33:40.335170  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:40.373447  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:33:40.409075  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:40.435692  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:40.460419  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:40.492357  459741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:40.515212  459741 ssh_runner.go:195] Run: openssl version
	I0717 19:33:40.523462  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:40.537951  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.544201  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.544264  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.552233  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:40.567486  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:40.583035  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.589287  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.589367  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.595802  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:40.613013  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:40.625080  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.630225  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.630298  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.636697  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:40.647728  459741 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:40.653165  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:40.659380  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:40.666126  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:40.673361  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:40.680123  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:40.686669  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
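Editorial note: the block above is minikube probing each control-plane certificate with "openssl x509 -noout -in <cert> -checkend 86400", where exit status 0 means the certificate is still valid 24 hours from now. Below is a minimal Go sketch of that probe (not minikube's actual code), running the same openssl invocation locally instead of through ssh_runner; the two paths are copied from the log purely for illustration.

// cert_check_sketch.go - illustrative only; mirrors the openssl probe seen in the log.
package main

import (
	"fmt"
	"os/exec"
)

// certValidForADay reports whether the certificate at path is still valid
// 24 hours (86400 seconds) from now, using openssl's -checkend exit status.
func certValidForADay(path string) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			// Non-zero exit: the cert expires within the window (or is unreadable by openssl).
			return false, nil
		}
		return false, err // openssl missing, permission problem, etc.
	}
	return true, nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		ok, err := certValidForADay(p)
		fmt.Println(p, "valid>24h:", ok, "err:", err)
	}
}

The -checkend form avoids parsing notAfter dates by hand: openssl's exit status alone says whether the certificate needs renewal.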
	I0717 19:33:40.693569  459741 kubeadm.go:392] StartCluster: {Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:40.693682  459741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:40.693767  459741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:40.737536  459741 cri.go:89] found id: ""
	I0717 19:33:40.737637  459741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:40.749268  459741 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:40.749292  459741 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:40.749347  459741 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:40.760298  459741 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:40.761436  459741 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-998147" does not appear in /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:33:40.762162  459741 kubeconfig.go:62] /home/jenkins/minikube-integration/19282-392903/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-998147" cluster setting kubeconfig missing "old-k8s-version-998147" context setting]
	I0717 19:33:40.763136  459741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:40.860353  459741 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:40.871291  459741 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.208
	I0717 19:33:40.871329  459741 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:40.871348  459741 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:40.871404  459741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:40.909329  459741 cri.go:89] found id: ""
	I0717 19:33:40.909419  459741 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:40.926501  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:40.937534  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:40.937565  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:40.937640  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:40.946613  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:40.946692  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:40.956996  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:40.965988  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:40.966046  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:40.975285  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:40.984577  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:40.984642  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:40.994458  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:41.007766  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:41.007821  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:41.020451  459741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:41.034173  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:41.176766  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:38.694137  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:40.694562  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:41.594983  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:41.595523  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:41.595554  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:41.595469  460929 retry.go:31] will retry after 2.022688841s: waiting for machine to come up
	I0717 19:33:43.619805  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:43.620241  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:43.620277  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:43.620212  460929 retry.go:31] will retry after 3.581051367s: waiting for machine to come up
	I0717 19:33:41.896941  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:44.394301  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:42.579917  459741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.403105878s)
	I0717 19:33:42.579958  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:42.840718  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:42.961394  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:43.055710  459741 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:43.055799  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:43.556468  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:44.055954  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:44.555966  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:45.056266  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:45.556627  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:46.056807  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:42.695989  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:45.194178  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:47.195661  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:47.205836  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:47.206321  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:47.206343  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:47.206278  460929 retry.go:31] will retry after 4.261122451s: waiting for machine to come up
	I0717 19:33:46.894466  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:49.395152  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:46.555904  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:47.056616  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:47.556787  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:48.056072  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:48.555979  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:49.056074  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:49.556619  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:50.056758  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:50.555862  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:51.055991  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:49.692660  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.693700  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.470426  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.470961  459061 main.go:141] libmachine: (embed-certs-637675) Found IP for machine: 192.168.39.140
	I0717 19:33:51.470987  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has current primary IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.470994  459061 main.go:141] libmachine: (embed-certs-637675) Reserving static IP address...
	I0717 19:33:51.471473  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "embed-certs-637675", mac: "52:54:00:33:d5:fa", ip: "192.168.39.140"} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.471502  459061 main.go:141] libmachine: (embed-certs-637675) Reserved static IP address: 192.168.39.140
	I0717 19:33:51.471530  459061 main.go:141] libmachine: (embed-certs-637675) DBG | skip adding static IP to network mk-embed-certs-637675 - found existing host DHCP lease matching {name: "embed-certs-637675", mac: "52:54:00:33:d5:fa", ip: "192.168.39.140"}
	I0717 19:33:51.471548  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Getting to WaitForSSH function...
	I0717 19:33:51.471563  459061 main.go:141] libmachine: (embed-certs-637675) Waiting for SSH to be available...
	I0717 19:33:51.474038  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.474414  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.474445  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.474588  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Using SSH client type: external
	I0717 19:33:51.474617  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa (-rw-------)
	I0717 19:33:51.474655  459061 main.go:141] libmachine: (embed-certs-637675) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:51.474675  459061 main.go:141] libmachine: (embed-certs-637675) DBG | About to run SSH command:
	I0717 19:33:51.474699  459061 main.go:141] libmachine: (embed-certs-637675) DBG | exit 0
	I0717 19:33:51.604737  459061 main.go:141] libmachine: (embed-certs-637675) DBG | SSH cmd err, output: <nil>: 
	I0717 19:33:51.605100  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetConfigRaw
	I0717 19:33:51.605831  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:51.608613  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.608977  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.609023  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.609289  459061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/config.json ...
	I0717 19:33:51.609523  459061 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:51.609557  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:51.609778  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.611949  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.612259  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.612295  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.612408  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.612598  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.612765  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.612911  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.613071  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.613293  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.613307  459061 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:51.716785  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:51.716815  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.717101  459061 buildroot.go:166] provisioning hostname "embed-certs-637675"
	I0717 19:33:51.717136  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.717318  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.719807  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.720137  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.720163  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.720315  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.720545  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.720719  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.720892  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.721086  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.721258  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.721271  459061 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-637675 && echo "embed-certs-637675" | sudo tee /etc/hostname
	I0717 19:33:51.844077  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-637675
	
	I0717 19:33:51.844111  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.847369  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.847949  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.847987  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.848185  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.848361  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.848523  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.848703  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.848912  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.849127  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.849145  459061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-637675' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-637675/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-637675' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:51.961570  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
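Editorial note: the SSH command above is an idempotent /etc/hosts fix-up, run while provisioning the hostname: if the new hostname is absent, it either rewrites an existing 127.0.1.1 entry or appends one. A small Go sketch of the same decision logic (illustrative only, operating on an in-memory string rather than the guest's /etc/hosts):

// hosts_entry_sketch.go - reproduces the grep/sed/tee logic from the logged SSH command.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry returns hosts with a 127.0.1.1 entry for name,
// leaving the content unchanged if name is already present.
func ensureHostsEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 minikube\n", "embed-certs-637675"))
}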
	I0717 19:33:51.961608  459061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:51.961632  459061 buildroot.go:174] setting up certificates
	I0717 19:33:51.961644  459061 provision.go:84] configureAuth start
	I0717 19:33:51.961658  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.961931  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:51.964788  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.965123  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.965150  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.965303  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.967517  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.967881  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.967910  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.968060  459061 provision.go:143] copyHostCerts
	I0717 19:33:51.968129  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:51.968140  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:51.968203  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:51.968333  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:51.968344  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:51.968371  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:51.968546  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:51.968558  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:51.968605  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:51.968692  459061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.embed-certs-637675 san=[127.0.0.1 192.168.39.140 embed-certs-637675 localhost minikube]
	I0717 19:33:52.257323  459061 provision.go:177] copyRemoteCerts
	I0717 19:33:52.257408  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:52.257443  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.260461  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.260873  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.260897  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.261094  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.261307  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.261485  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.261619  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.347197  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:52.372509  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 19:33:52.397643  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:52.421482  459061 provision.go:87] duration metric: took 459.823049ms to configureAuth
	I0717 19:33:52.421511  459061 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:52.421712  459061 config.go:182] Loaded profile config "embed-certs-637675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:33:52.421789  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.424390  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.424796  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.424827  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.425027  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.425221  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.425363  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.425502  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.425661  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:52.425872  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:52.425902  459061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:52.699426  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:52.699458  459061 machine.go:97] duration metric: took 1.089918524s to provisionDockerMachine
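Editorial note: the SSH command logged at 19:33:52.425 writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts cri-o. The "%!s(MISSING)" in the logged command is most likely Go's fmt flagging a literal %s inside the logged message rather than part of what actually ran on the guest. A sketch that builds the intended command string (the helper name is hypothetical; the path and option value are taken from the log):

// crio_sysconfig_sketch.go - illustrative reconstruction of the logged command.
package main

import "fmt"

// crioOptionsCommand returns the shell command that persists the insecure-registry
// option for cri-o and restarts the service.
func crioOptionsCommand(serviceCIDR string) string {
	opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
}

func main() {
	fmt.Println(crioOptionsCommand("10.96.0.0/12"))
}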
	I0717 19:33:52.699470  459061 start.go:293] postStartSetup for "embed-certs-637675" (driver="kvm2")
	I0717 19:33:52.699483  459061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:52.699505  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.699888  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:52.699943  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.703018  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.703417  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.703463  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.703693  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.704007  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.704318  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.704519  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.791925  459061 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:52.795954  459061 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:52.795980  459061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:52.796095  459061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:52.796191  459061 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:52.796308  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:52.805548  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:52.829531  459061 start.go:296] duration metric: took 130.04771ms for postStartSetup
	I0717 19:33:52.829569  459061 fix.go:56] duration metric: took 20.611916701s for fixHost
	I0717 19:33:52.829611  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.832274  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.832744  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.832778  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.832883  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.833094  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.833276  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.833448  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.833632  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:52.833852  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:52.833871  459061 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:33:52.941152  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244832.915250809
	
	I0717 19:33:52.941180  459061 fix.go:216] guest clock: 1721244832.915250809
	I0717 19:33:52.941194  459061 fix.go:229] Guest: 2024-07-17 19:33:52.915250809 +0000 UTC Remote: 2024-07-17 19:33:52.829573693 +0000 UTC m=+356.572558813 (delta=85.677116ms)
	I0717 19:33:52.941221  459061 fix.go:200] guest clock delta is within tolerance: 85.677116ms
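Editorial note: the fix.go lines above compare the guest's "date +%s.%N" output against the host clock and accept the skew if it is within tolerance. A small sketch of that comparison (not the actual minikube code; the 1-second tolerance is an assumption). Fed the two timestamps from this log, it reproduces the 85.677116ms delta shown above.

// clock_delta_sketch.go - parses seconds.nanoseconds output and computes guest-host skew.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns guest - host.
func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional field to exactly nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Values copied from the log: guest clock vs. remote (host) timestamp.
	d, _ := guestClockDelta("1721244832.915250809", time.Unix(1721244832, 829573693))
	fmt.Printf("delta=%v within tolerance=%v\n", d, math.Abs(d.Seconds()) < 1.0)
}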
	I0717 19:33:52.941232  459061 start.go:83] releasing machines lock for "embed-certs-637675", held for 20.723622875s
	I0717 19:33:52.941257  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.941557  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:52.944096  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.944498  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.944526  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.944682  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945170  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945409  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945520  459061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:52.945595  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.945624  459061 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:52.945653  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.948197  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948530  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.948557  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948575  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948781  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.948912  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.948936  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948966  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.949080  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.949205  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.949228  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.949348  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.949352  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.949465  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:53.054206  459061 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:53.060916  459061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:53.204303  459061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:53.210204  459061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:53.210262  459061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:53.226045  459061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:53.226072  459061 start.go:495] detecting cgroup driver to use...
	I0717 19:33:53.226138  459061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:53.243047  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:53.256611  459061 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:53.256678  459061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:53.269932  459061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:53.285394  459061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:53.412896  459061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:53.573675  459061 docker.go:233] disabling docker service ...
	I0717 19:33:53.573749  459061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:53.590083  459061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:53.603710  459061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:53.727530  459061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:53.873274  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:53.905871  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:53.926509  459061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:33:53.926583  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.937258  459061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:53.937333  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.947782  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.958191  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.970004  459061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:53.982062  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.992942  459061 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:54.011137  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:54.022170  459061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:54.033118  459061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:54.033183  459061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:54.046510  459061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:33:54.056086  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:54.203486  459061 ssh_runner.go:195] Run: sudo systemctl restart crio
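Editorial note: the sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image forced to registry.k8s.io/pause:3.9, cgroup_manager set to cgroupfs, conmon_cgroup and the unprivileged-port sysctl adjusted) before reloading systemd and restarting cri-o. A minimal Go sketch of the first two substitutions, applied to an in-memory string purely for illustration (not minikube's actual code):

// crio_conf_sketch.go - sed-style key rewrites on a cri-o drop-in config.
package main

import (
	"fmt"
	"regexp"
)

// configureCrio rewrites the pause_image and cgroup_manager keys, matching
// whole lines the way the logged sed commands do.
func configureCrio(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	sample := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(configureCrio(sample, "registry.k8s.io/pause:3.9", "cgroupfs"))
}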
	I0717 19:33:54.336557  459061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:54.336645  459061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:54.342342  459061 start.go:563] Will wait 60s for crictl version
	I0717 19:33:54.342422  459061 ssh_runner.go:195] Run: which crictl
	I0717 19:33:54.346334  459061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:54.388801  459061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:54.388898  459061 ssh_runner.go:195] Run: crio --version
	I0717 19:33:54.419237  459061 ssh_runner.go:195] Run: crio --version
	I0717 19:33:54.459513  459061 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 19:33:54.460727  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:54.463803  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:54.464194  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:54.464235  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:54.464521  459061 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:54.469869  459061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:54.484510  459061 kubeadm.go:883] updating cluster {Name:embed-certs-637675 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:54.484680  459061 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:33:54.484750  459061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:54.530253  459061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 19:33:54.530339  459061 ssh_runner.go:195] Run: which lz4
	I0717 19:33:54.534466  459061 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:33:54.538610  459061 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:54.538642  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 19:33:55.923529  459061 crio.go:462] duration metric: took 1.389095679s to copy over tarball
	I0717 19:33:55.923617  459061 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:51.894538  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:53.896853  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:56.394940  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.556187  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:52.056816  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:52.555884  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.056440  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.556003  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:54.056810  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:54.556947  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:55.055878  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:55.556110  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:56.056460  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
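Editorial note: the repeated pgrep lines from process 459741 are the "waiting for apiserver process to appear" loop from api_server.go: the same pgrep is retried roughly every 500ms until it exits 0. A sketch of that polling pattern with an explicit deadline (illustrative only; the 2-minute timeout is an assumption, not taken from the log):

// apiserver_wait_sketch.go - poll for the kube-apiserver process until it appears or the context expires.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("kube-apiserver process did not appear: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	fmt.Println(waitForAPIServerProcess(ctx))
}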
	I0717 19:33:53.693746  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:55.695193  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:58.139069  459061 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.215401803s)
	I0717 19:33:58.139116  459061 crio.go:469] duration metric: took 2.215553314s to extract the tarball
	I0717 19:33:58.139127  459061 ssh_runner.go:146] rm: /preloaded.tar.lz4
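
The block above is the preload path: if crictl reports that the expected control-plane images are missing, the preloaded tarball is copied over, unpacked into /var, and then removed. A minimal local sketch of that check-then-extract flow (the helper names below are illustrative, not minikube's ssh_runner API):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    // hasImage asks crictl for its image list and reports whether the given
    // reference is already present in the CRI-O image store.
    func hasImage(ref string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var resp struct {
            Images []struct {
                RepoTags []string `json:"repoTags"`
            } `json:"images"`
        }
        if err := json.Unmarshal(out, &resp); err != nil {
            return false, err
        }
        for _, img := range resp.Images {
            for _, tag := range img.RepoTags {
                if strings.EqualFold(tag, ref) {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    // extractPreload unpacks a preloaded image tarball into /var, the same
    // "tar --xattrs -I lz4 -C /var -xf" step that appears in the log above.
    func extractPreload(tarball string) error {
        return exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball).Run()
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.2")
        if err != nil {
            fmt.Println("image check failed:", err)
            return
        }
        if !ok {
            fmt.Println("images not preloaded; extracting tarball")
            if err := extractPreload("/preloaded.tar.lz4"); err != nil {
                fmt.Println("extract failed:", err)
            }
        }
    }
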
	I0717 19:33:58.178293  459061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:58.219163  459061 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:33:58.219188  459061 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:33:58.219197  459061 kubeadm.go:934] updating node { 192.168.39.140 8443 v1.30.2 crio true true} ...
	I0717 19:33:58.219306  459061 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-637675 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
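
The kubelet [Service] fragment above is a systemd drop-in: the empty ExecStart= line clears the base unit's command before the override with the node-specific flags is set. A sketch of rendering that drop-in from the values shown in the config line, assuming a hypothetical node struct:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletDropIn mirrors the shape of the drop-in shown above; the empty
    // ExecStart= resets the base unit before the override takes effect.
    const kubeletDropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    type node struct {
        KubernetesVersion string
        NodeName          string
        NodeIP            string
    }

    func main() {
        tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
        // Values taken from the log lines above.
        _ = tmpl.Execute(os.Stdout, node{
            KubernetesVersion: "v1.30.2",
            NodeName:          "embed-certs-637675",
            NodeIP:            "192.168.39.140",
        })
    }
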
	I0717 19:33:58.219383  459061 ssh_runner.go:195] Run: crio config
	I0717 19:33:58.262906  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:33:58.262925  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:58.262934  459061 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:58.262957  459061 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.140 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-637675 NodeName:embed-certs-637675 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:58.263084  459061 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-637675"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:58.263147  459061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 19:33:58.273657  459061 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:58.273723  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:58.283599  459061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0717 19:33:58.300393  459061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:58.317742  459061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
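
The kubeadm config printed above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that has just been copied to /var/tmp/minikube/kubeadm.yaml.new. A small sanity-check sketch that the stream parses and carries the expected kinds, assuming the gopkg.in/yaml.v3 package:

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    // kind is the minimal slice of each kubeadm document we care about here.
    type kind struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Println("open:", err)
            return
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var k kind
            if err := dec.Decode(&k); err != nil {
                break // io.EOF once all documents are read
            }
            fmt.Printf("%s / %s\n", k.APIVersion, k.Kind)
        }
        // Expected output for the config above:
        //   kubeadm.k8s.io/v1beta3 / InitConfiguration
        //   kubeadm.k8s.io/v1beta3 / ClusterConfiguration
        //   kubelet.config.k8s.io/v1beta1 / KubeletConfiguration
        //   kubeproxy.config.k8s.io/v1alpha1 / KubeProxyConfiguration
    }
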
	I0717 19:33:58.334880  459061 ssh_runner.go:195] Run: grep 192.168.39.140	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:58.338573  459061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
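
The bash one-liner above makes the control-plane.minikube.internal entry in /etc/hosts idempotent: strip any stale mapping, append the current one, and copy the result back via a temp file. The same idea in Go, as an illustration rather than minikube's actual helper:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites hostsPath so that exactly one line maps ip to
    // host, mirroring the grep -v / append / cp dance in the log above.
    func ensureHostsEntry(hostsPath, ip, host string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop any stale mapping for this host
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        tmp := hostsPath + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, hostsPath) // swap the file in one step
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.140", "control-plane.minikube.internal"); err != nil {
            fmt.Println("update failed:", err)
        }
    }
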
	I0717 19:33:58.350476  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:58.480706  459061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:58.498116  459061 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675 for IP: 192.168.39.140
	I0717 19:33:58.498139  459061 certs.go:194] generating shared ca certs ...
	I0717 19:33:58.498161  459061 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:58.498326  459061 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:58.498380  459061 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:58.498394  459061 certs.go:256] generating profile certs ...
	I0717 19:33:58.498518  459061 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/client.key
	I0717 19:33:58.498580  459061 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.key.c8cdbf09
	I0717 19:33:58.498853  459061 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.key
	I0717 19:33:58.499016  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:58.499066  459061 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:58.499081  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:58.499115  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:58.499256  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:58.499299  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:58.499435  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:58.500359  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:58.544981  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:58.588099  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:58.621983  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:58.652262  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 19:33:58.676887  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:33:58.701437  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:58.726502  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:58.751839  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:58.777500  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:58.801388  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:58.825450  459061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:58.842717  459061 ssh_runner.go:195] Run: openssl version
	I0717 19:33:58.848256  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:58.858519  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.863057  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.863130  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.869045  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:58.879255  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:58.890546  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.895342  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.895394  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.901225  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:58.912043  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:58.922557  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.926974  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.927063  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.932819  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
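
The openssl x509 -hash calls above compute the subject-name hash OpenSSL uses for CA lookup; each PEM under /usr/share/ca-certificates is then symlinked as /etc/ssl/certs/<hash>.0 (b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the other two). A sketch of that hash-and-link step, assuming openssl is on PATH:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert computes the OpenSSL subject hash for certPath and creates the
    // /etc/ssl/certs/<hash>.0 symlink that the c_rehash layout expects.
    func linkCACert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace an existing link if present
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println("link failed:", err)
        }
    }
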
	I0717 19:33:58.943396  459061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:58.947900  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:58.953946  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:58.960139  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:58.965932  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:58.971638  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:58.977437  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
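
The -checkend 86400 checks above ask whether each certificate will still be valid 24 hours from now; openssl exits non-zero if it would expire inside that window. The same test without shelling out, sketched with crypto/x509:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d,
    // the same condition `openssl x509 -checkend <seconds>` tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
        } {
            soon, err := expiresWithin(p, 24*time.Hour)
            if err != nil {
                fmt.Println(p, "error:", err)
                continue
            }
            fmt.Println(p, "expires within 24h:", soon)
        }
    }
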
	I0717 19:33:58.983041  459061 kubeadm.go:392] StartCluster: {Name:embed-certs-637675 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:58.983125  459061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:58.983159  459061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:59.026606  459061 cri.go:89] found id: ""
	I0717 19:33:59.026700  459061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:59.037020  459061 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:59.037045  459061 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:59.037089  459061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:59.046698  459061 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:59.047817  459061 kubeconfig.go:125] found "embed-certs-637675" server: "https://192.168.39.140:8443"
	I0717 19:33:59.049941  459061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:59.059451  459061 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.140
	I0717 19:33:59.059482  459061 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:59.059500  459061 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:59.059544  459061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:59.095066  459061 cri.go:89] found id: ""
	I0717 19:33:59.095128  459061 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:59.112170  459061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:59.122995  459061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:59.123014  459061 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:59.123063  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:59.133289  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:59.133372  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:59.143515  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:59.152845  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:59.152898  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:59.162821  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:59.173290  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:59.173353  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:59.184053  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:59.195281  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:59.195345  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
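
The grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so the following kubeadm init phases regenerate it (here all four were simply missing). A compact sketch of that rule:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // removeIfStale deletes confPath unless it already points at the expected
    // control-plane endpoint, mirroring the grep-then-rm loop in the log above.
    func removeIfStale(confPath, endpoint string) {
        data, err := os.ReadFile(confPath)
        if err == nil && bytes.Contains(data, []byte(endpoint)) {
            return // up to date, keep it
        }
        _ = os.Remove(confPath) // missing or stale: let kubeadm regenerate it
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            removeIfStale(f, endpoint)
            fmt.Println("checked", f)
        }
    }
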
	I0717 19:33:59.205300  459061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:59.219019  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:59.337326  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.220304  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.451460  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.631448  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.701064  459061 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:34:00.701166  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.201848  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.895830  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.394535  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:56.556934  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.055977  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.556878  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.056308  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.556348  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:59.056674  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:59.556870  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:00.055931  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:00.555977  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.055886  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.695265  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:59.973534  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:02.193004  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.701254  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.809514  459061 api_server.go:72] duration metric: took 1.10844859s to wait for apiserver process to appear ...
	I0717 19:34:01.809547  459061 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:34:01.809597  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:01.810183  459061 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I0717 19:34:02.309904  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.789701  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:34:04.789732  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:34:04.789745  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.862326  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:34:04.862359  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:34:04.862371  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.885715  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:04.885755  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:05.310281  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:05.314611  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:05.314645  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:05.810297  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:05.817458  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:05.817492  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:03.395467  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:05.894353  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.556897  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:02.056800  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:02.556122  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:03.056427  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:03.556914  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.056571  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.556144  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:05.056037  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:05.555875  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:06.056743  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.193618  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.194585  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.310494  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:06.318694  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:06.318740  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:06.809794  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:06.815231  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:06.815259  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:07.310287  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:07.314865  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:07.314892  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:07.810489  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:07.815153  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:07.815184  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:08.310494  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:08.315173  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0717 19:34:08.321509  459061 api_server.go:141] control plane version: v1.30.2
	I0717 19:34:08.321539  459061 api_server.go:131] duration metric: took 6.51198343s to wait for apiserver health ...
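
The healthz exchange above follows the usual restart progression: connection refused while the apiserver binds, 403 for the anonymous probe, 500 while post-start hooks such as rbac/bootstrap-roles are still pending, then 200. A sketch of such a poll loop; TLS verification is skipped here purely for illustration, a real client would trust the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver's /healthz endpoint until it returns
    // 200 or the deadline passes, printing the body on failure much as the
    // log above does for 403/500 responses.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Illustration only: skip verification instead of loading the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.140:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
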
	I0717 19:34:08.321550  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:34:08.321558  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:34:08.323369  459061 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:34:08.324555  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:34:08.336384  459061 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
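
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI for the 10.244.0.0/16 pod CIDR chosen earlier. Its exact contents are not shown in the log; the conflist below is only an illustration of the general bridge-plus-portmap shape, written from Go to keep these examples in one language:

    package main

    import (
        "fmt"
        "os"
    )

    // An illustrative bridge+portmap conflist for the 10.244.0.0/16 pod CIDR.
    // The real 1-k8s.conflist written above is not shown in the log, so treat
    // this as an example of the general shape only.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            fmt.Println("write failed:", err)
        }
    }
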
	I0717 19:34:08.357196  459061 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:34:08.373813  459061 system_pods.go:59] 8 kube-system pods found
	I0717 19:34:08.373849  459061 system_pods.go:61] "coredns-7db6d8ff4d-8brst" [aec5eaab-66a7-4221-84a1-b7967bd26cb8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:34:08.373856  459061 system_pods.go:61] "etcd-embed-certs-637675" [f2e395a3-fd1f-4a92-98ce-d6093d7b2faf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:34:08.373864  459061 system_pods.go:61] "kube-apiserver-embed-certs-637675" [358154e3-59e5-4535-9e1d-ee3b9eab5464] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:34:08.373871  459061 system_pods.go:61] "kube-controller-manager-embed-certs-637675" [641c70ba-a6fa-4975-bdb5-727b5ba64a87] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:34:08.373875  459061 system_pods.go:61] "kube-proxy-4cv66" [1a561d4e-4910-4ff0-9a1e-070e60e27cb4] Running
	I0717 19:34:08.373879  459061 system_pods.go:61] "kube-scheduler-embed-certs-637675" [83f50c1c-44ca-4b1f-ad85-0c617f1c8a67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:34:08.373886  459061 system_pods.go:61] "metrics-server-569cc877fc-mtnc6" [c44ea24f-67b5-4540-8c27-5b0068ac55b1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:34:08.373889  459061 system_pods.go:61] "storage-provisioner" [c42c411b-4206-4686-95c4-c9c279877684] Running
	I0717 19:34:08.373895  459061 system_pods.go:74] duration metric: took 16.671935ms to wait for pod list to return data ...
	I0717 19:34:08.373902  459061 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:34:08.388698  459061 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:34:08.388737  459061 node_conditions.go:123] node cpu capacity is 2
	I0717 19:34:08.388749  459061 node_conditions.go:105] duration metric: took 14.84302ms to run NodePressure ...
	I0717 19:34:08.388769  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:08.750983  459061 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:34:08.759547  459061 kubeadm.go:739] kubelet initialised
	I0717 19:34:08.759579  459061 kubeadm.go:740] duration metric: took 8.564098ms waiting for restarted kubelet to initialise ...
	I0717 19:34:08.759592  459061 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:34:08.769683  459061 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.780332  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.780364  459061 pod_ready.go:81] duration metric: took 10.641436ms for pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.780377  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.780387  459061 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.791556  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "etcd-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.791590  459061 pod_ready.go:81] duration metric: took 11.19204ms for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.791605  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "etcd-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.791613  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.801822  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.801874  459061 pod_ready.go:81] duration metric: took 10.246706ms for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.801889  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.801905  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.807704  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.807735  459061 pod_ready.go:81] duration metric: took 5.8166ms for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.807747  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.807755  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4cv66" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:09.161548  459061 pod_ready.go:92] pod "kube-proxy-4cv66" in "kube-system" namespace has status "Ready":"True"
	I0717 19:34:09.161587  459061 pod_ready.go:81] duration metric: took 353.822822ms for pod "kube-proxy-4cv66" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:09.161597  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:11.168387  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:07.894730  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:09.895797  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.556740  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:07.056120  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:07.556375  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.055926  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.556426  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:09.056856  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:09.556032  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:10.056791  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:10.556117  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:11.056198  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.694237  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:11.192662  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:13.168686  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:15.668585  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:12.395034  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:14.895242  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:11.556103  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:12.056463  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:12.556709  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.056048  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.556926  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:14.056810  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:14.556793  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:15.056168  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:15.556716  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:16.056041  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.194925  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:15.693550  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:17.668639  459061 pod_ready.go:92] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:34:17.668755  459061 pod_ready.go:81] duration metric: took 8.50714283s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:17.668772  459061 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:19.678850  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:17.395670  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:19.395898  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:21.396841  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:16.556695  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.056877  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.556620  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:18.056628  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:18.556552  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:19.056137  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:19.556627  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:20.056655  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:20.556041  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:21.056058  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.694895  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:20.194174  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:22.176132  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:24.674293  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:23.894981  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.394921  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:21.556663  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.056552  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.556508  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:23.056623  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:23.556414  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:24.055964  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:24.556741  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:25.056721  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:25.556914  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:26.056520  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.693472  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:24.693880  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.695637  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.675680  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:29.176560  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:28.896034  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.394391  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.555925  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:27.056754  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:27.555925  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:28.056226  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:28.556626  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.056219  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.556961  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:30.056546  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:30.555883  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:31.056398  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.195231  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.693669  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.674839  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:33.676172  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:35.676669  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:33.394904  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:35.399901  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.556766  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:32.056928  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:32.556232  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:33.055917  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:33.556864  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.056869  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.555951  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:35.056718  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:35.556230  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:36.056542  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.195066  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:36.692760  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:38.175828  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:40.676034  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:37.894862  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:40.399004  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:36.556557  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:37.056940  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:37.556241  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.056369  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.555969  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:39.056289  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:39.556107  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:40.055999  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:40.556561  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:41.055882  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.693922  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:41.194229  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:42.676087  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:44.680245  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:42.898155  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:45.402470  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:41.556589  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:42.055932  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:42.556345  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:43.056754  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:43.056873  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:43.097168  459741 cri.go:89] found id: ""
	I0717 19:34:43.097214  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.097226  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:43.097234  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:43.097302  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:43.139033  459741 cri.go:89] found id: ""
	I0717 19:34:43.139067  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.139077  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:43.139084  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:43.139138  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:43.179520  459741 cri.go:89] found id: ""
	I0717 19:34:43.179549  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.179558  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:43.179566  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:43.179705  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:43.216014  459741 cri.go:89] found id: ""
	I0717 19:34:43.216044  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.216063  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:43.216071  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:43.216141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:43.250985  459741 cri.go:89] found id: ""
	I0717 19:34:43.251030  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.251038  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:43.251044  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:43.251109  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:43.286797  459741 cri.go:89] found id: ""
	I0717 19:34:43.286840  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.286849  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:43.286856  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:43.286919  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:43.321626  459741 cri.go:89] found id: ""
	I0717 19:34:43.321657  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.321665  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:43.321671  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:43.321733  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:43.355415  459741 cri.go:89] found id: ""
	I0717 19:34:43.355444  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.355452  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:43.355462  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:43.355476  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:43.409331  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:43.409369  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:43.424013  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:43.424038  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:43.559102  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:43.559132  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:43.559149  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:43.625751  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:43.625791  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:46.168132  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:46.196943  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:46.197013  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:46.254167  459741 cri.go:89] found id: ""
	I0717 19:34:46.254197  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.254205  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:46.254211  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:46.254277  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:46.291018  459741 cri.go:89] found id: ""
	I0717 19:34:46.291052  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.291063  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:46.291072  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:46.291136  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:46.331767  459741 cri.go:89] found id: ""
	I0717 19:34:46.331812  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.331825  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:46.331835  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:46.331918  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:46.373157  459741 cri.go:89] found id: ""
	I0717 19:34:46.373206  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.373218  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:46.373226  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:46.373297  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:46.413014  459741 cri.go:89] found id: ""
	I0717 19:34:46.413041  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.413055  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:46.413061  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:46.413114  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:46.456115  459741 cri.go:89] found id: ""
	I0717 19:34:46.456148  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.456159  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:46.456167  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:46.456230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:46.492962  459741 cri.go:89] found id: ""
	I0717 19:34:46.493048  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.493063  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:46.493074  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:46.493149  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:43.195298  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:45.695368  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:47.175268  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:49.176199  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:47.895768  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:50.395078  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:46.533824  459741 cri.go:89] found id: ""
	I0717 19:34:46.533856  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.533868  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:46.533882  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:46.533899  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:46.614205  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:46.614229  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:46.614242  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:46.689833  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:46.689875  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:46.729427  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:46.729463  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:46.779887  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:46.779930  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:49.294846  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:49.308554  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:49.308625  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:49.343774  459741 cri.go:89] found id: ""
	I0717 19:34:49.343802  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.343810  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:49.343816  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:49.343872  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:49.380698  459741 cri.go:89] found id: ""
	I0717 19:34:49.380729  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.380737  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:49.380744  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:49.380796  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:49.422026  459741 cri.go:89] found id: ""
	I0717 19:34:49.422059  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.422073  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:49.422082  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:49.422147  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:49.465793  459741 cri.go:89] found id: ""
	I0717 19:34:49.465837  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.465850  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:49.465859  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:49.465929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:49.503462  459741 cri.go:89] found id: ""
	I0717 19:34:49.503507  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.503519  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:49.503528  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:49.503598  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:49.546776  459741 cri.go:89] found id: ""
	I0717 19:34:49.546808  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.546818  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:49.546826  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:49.546895  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:49.589367  459741 cri.go:89] found id: ""
	I0717 19:34:49.589401  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.589412  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:49.589420  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:49.589493  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:49.625497  459741 cri.go:89] found id: ""
	I0717 19:34:49.625532  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.625543  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:49.625557  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:49.625574  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:49.664499  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:49.664536  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:49.718160  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:49.718202  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:49.732774  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:49.732807  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:49.806951  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:49.806981  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:49.806999  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:48.192967  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:50.193695  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:51.675656  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:54.175342  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:56.176351  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:52.895953  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:55.394057  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:52.379790  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:52.393469  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:52.393554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:52.434277  459741 cri.go:89] found id: ""
	I0717 19:34:52.434312  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.434322  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:52.434330  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:52.434388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:52.470378  459741 cri.go:89] found id: ""
	I0717 19:34:52.470413  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.470421  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:52.470428  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:52.470501  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:52.506331  459741 cri.go:89] found id: ""
	I0717 19:34:52.506361  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.506369  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:52.506376  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:52.506431  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:52.547497  459741 cri.go:89] found id: ""
	I0717 19:34:52.547532  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.547540  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:52.547545  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:52.547615  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:52.584389  459741 cri.go:89] found id: ""
	I0717 19:34:52.584423  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.584434  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:52.584442  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:52.584527  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:52.621381  459741 cri.go:89] found id: ""
	I0717 19:34:52.621408  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.621416  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:52.621422  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:52.621472  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:52.661706  459741 cri.go:89] found id: ""
	I0717 19:34:52.661744  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.661756  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:52.661764  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:52.661832  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:52.702736  459741 cri.go:89] found id: ""
	I0717 19:34:52.702763  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.702773  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:52.702784  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:52.702799  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:52.741742  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:52.741779  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:52.794377  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:52.794429  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:52.809685  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:52.809717  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:52.884263  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:52.884289  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:52.884305  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:55.472342  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:55.486612  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:55.486677  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:55.519486  459741 cri.go:89] found id: ""
	I0717 19:34:55.519514  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.519522  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:55.519528  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:55.519638  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:55.555162  459741 cri.go:89] found id: ""
	I0717 19:34:55.555190  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.555198  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:55.555204  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:55.555259  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:55.591239  459741 cri.go:89] found id: ""
	I0717 19:34:55.591276  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.591288  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:55.591297  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:55.591359  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:55.628203  459741 cri.go:89] found id: ""
	I0717 19:34:55.628239  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.628251  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:55.628258  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:55.628347  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:55.664663  459741 cri.go:89] found id: ""
	I0717 19:34:55.664702  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.664715  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:55.664725  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:55.664822  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:55.702741  459741 cri.go:89] found id: ""
	I0717 19:34:55.702773  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.702780  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:55.702788  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:55.702862  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:55.745601  459741 cri.go:89] found id: ""
	I0717 19:34:55.745642  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.745653  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:55.745661  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:55.745742  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:55.786699  459741 cri.go:89] found id: ""
	I0717 19:34:55.786727  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.786736  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:55.786746  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:55.786764  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:55.831685  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:55.831722  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:55.885346  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:55.885389  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:55.902374  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:55.902407  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:55.974221  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:55.974245  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:55.974259  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:52.693991  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:55.194420  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:58.676747  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:01.176131  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:57.894988  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:00.394486  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:58.557685  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:58.571821  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:58.571887  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:58.606713  459741 cri.go:89] found id: ""
	I0717 19:34:58.606742  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.606751  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:58.606757  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:58.606831  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:58.640693  459741 cri.go:89] found id: ""
	I0717 19:34:58.640728  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.640738  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:58.640746  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:58.640816  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:58.675351  459741 cri.go:89] found id: ""
	I0717 19:34:58.675385  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.675396  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:58.675403  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:58.675470  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:58.711792  459741 cri.go:89] found id: ""
	I0717 19:34:58.711825  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.711834  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:58.711841  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:58.711898  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:58.751391  459741 cri.go:89] found id: ""
	I0717 19:34:58.751418  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.751427  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:58.751432  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:58.751492  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:58.789067  459741 cri.go:89] found id: ""
	I0717 19:34:58.789099  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.789109  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:58.789116  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:58.789193  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:58.827415  459741 cri.go:89] found id: ""
	I0717 19:34:58.827453  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.827464  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:58.827470  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:58.827538  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:58.865505  459741 cri.go:89] found id: ""
	I0717 19:34:58.865543  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.865553  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:58.865566  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:58.865587  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:58.921388  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:58.921427  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:58.935694  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:58.935724  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:59.012534  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:59.012561  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:59.012598  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:59.095950  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:59.096045  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:57.694041  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:00.194529  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:02.194641  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:03.176199  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:05.176261  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:02.894558  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:04.899436  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:01.640824  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:01.654969  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:01.655062  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:01.700480  459741 cri.go:89] found id: ""
	I0717 19:35:01.700528  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.700540  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:01.700548  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:01.700621  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:01.739274  459741 cri.go:89] found id: ""
	I0717 19:35:01.739309  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.739319  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:01.739327  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:01.739403  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:01.778555  459741 cri.go:89] found id: ""
	I0717 19:35:01.778591  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.778601  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:01.778609  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:01.778676  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:01.819147  459741 cri.go:89] found id: ""
	I0717 19:35:01.819189  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.819204  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:01.819213  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:01.819290  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:01.857132  459741 cri.go:89] found id: ""
	I0717 19:35:01.857178  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.857190  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:01.857199  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:01.857274  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:01.895551  459741 cri.go:89] found id: ""
	I0717 19:35:01.895583  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.895593  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:01.895602  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:01.895679  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:01.938146  459741 cri.go:89] found id: ""
	I0717 19:35:01.938185  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.938198  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:01.938206  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:01.938284  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:01.974876  459741 cri.go:89] found id: ""
	I0717 19:35:01.974909  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.974919  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:01.974933  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:01.974955  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:02.050651  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:02.050679  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:02.050711  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:02.130149  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:02.130191  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:02.170930  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:02.170961  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:02.226842  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:02.226889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:04.742978  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:04.757649  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:04.757714  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:04.795487  459741 cri.go:89] found id: ""
	I0717 19:35:04.795517  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.795525  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:04.795531  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:04.795583  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:04.832554  459741 cri.go:89] found id: ""
	I0717 19:35:04.832596  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.832607  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:04.832620  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:04.832678  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:04.867859  459741 cri.go:89] found id: ""
	I0717 19:35:04.867895  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.867904  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:04.867911  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:04.867971  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:04.905936  459741 cri.go:89] found id: ""
	I0717 19:35:04.905969  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.905978  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:04.905985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:04.906064  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:04.943177  459741 cri.go:89] found id: ""
	I0717 19:35:04.943204  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.943213  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:04.943219  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:04.943273  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:04.980038  459741 cri.go:89] found id: ""
	I0717 19:35:04.980073  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.980087  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:04.980093  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:04.980154  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:05.020848  459741 cri.go:89] found id: ""
	I0717 19:35:05.020885  459741 logs.go:276] 0 containers: []
	W0717 19:35:05.020896  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:05.020907  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:05.020985  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:05.060505  459741 cri.go:89] found id: ""
	I0717 19:35:05.060543  459741 logs.go:276] 0 containers: []
	W0717 19:35:05.060556  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:05.060592  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:05.060617  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:05.113354  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:05.113400  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:05.128045  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:05.128086  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:05.213923  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:05.214020  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:05.214045  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:05.296526  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:05.296577  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:04.194995  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:06.694576  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.678930  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:10.175252  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.394677  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:09.394932  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:11.395166  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.835865  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:07.851503  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:07.851581  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:07.899945  459741 cri.go:89] found id: ""
	I0717 19:35:07.899976  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.899984  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:07.899992  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:07.900066  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:07.938294  459741 cri.go:89] found id: ""
	I0717 19:35:07.938326  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.938335  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:07.938342  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:07.938402  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:07.975274  459741 cri.go:89] found id: ""
	I0717 19:35:07.975309  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.975319  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:07.975327  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:07.975401  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:08.010818  459741 cri.go:89] found id: ""
	I0717 19:35:08.010864  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.010873  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:08.010880  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:08.010945  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:08.054494  459741 cri.go:89] found id: ""
	I0717 19:35:08.054532  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.054544  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:08.054552  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:08.054651  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:08.096357  459741 cri.go:89] found id: ""
	I0717 19:35:08.096384  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.096393  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:08.096399  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:08.096461  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:08.134694  459741 cri.go:89] found id: ""
	I0717 19:35:08.134739  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.134749  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:08.134755  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:08.134833  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:08.171722  459741 cri.go:89] found id: ""
	I0717 19:35:08.171757  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.171768  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:08.171780  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:08.171797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:08.252441  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:08.252502  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:08.298782  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:08.298815  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:08.352934  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:08.352974  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:08.367121  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:08.367158  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:08.445860  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:10.946537  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:10.959955  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:10.960025  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:10.994611  459741 cri.go:89] found id: ""
	I0717 19:35:10.994646  459741 logs.go:276] 0 containers: []
	W0717 19:35:10.994658  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:10.994667  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:10.994733  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:11.031997  459741 cri.go:89] found id: ""
	I0717 19:35:11.032027  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.032035  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:11.032041  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:11.032115  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:11.073818  459741 cri.go:89] found id: ""
	I0717 19:35:11.073854  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.073865  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:11.073874  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:11.073942  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:11.109966  459741 cri.go:89] found id: ""
	I0717 19:35:11.110000  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.110012  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:11.110025  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:11.110100  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:11.146928  459741 cri.go:89] found id: ""
	I0717 19:35:11.146958  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.146980  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:11.146988  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:11.147056  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:11.189327  459741 cri.go:89] found id: ""
	I0717 19:35:11.189364  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.189374  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:11.189383  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:11.189457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:11.228587  459741 cri.go:89] found id: ""
	I0717 19:35:11.228628  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.228641  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:11.228650  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:11.228719  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:11.267624  459741 cri.go:89] found id: ""
	I0717 19:35:11.267671  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.267685  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:11.267699  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:11.267716  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:11.322589  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:11.322631  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:11.338101  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:11.338147  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:11.411360  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:11.411387  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:11.411405  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:11.495657  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:11.495701  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:09.194430  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:11.693290  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:12.175345  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:14.175825  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:16.177247  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:13.894711  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:15.894771  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:14.037797  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:14.050939  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:14.051012  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:14.093711  459741 cri.go:89] found id: ""
	I0717 19:35:14.093744  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.093756  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:14.093764  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:14.093837  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:14.132139  459741 cri.go:89] found id: ""
	I0717 19:35:14.132168  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.132180  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:14.132188  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:14.132256  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:14.170950  459741 cri.go:89] found id: ""
	I0717 19:35:14.170978  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.170988  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:14.170995  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:14.171073  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:14.211104  459741 cri.go:89] found id: ""
	I0717 19:35:14.211138  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.211148  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:14.211155  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:14.211229  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:14.245921  459741 cri.go:89] found id: ""
	I0717 19:35:14.245961  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.245975  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:14.245985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:14.246053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:14.309477  459741 cri.go:89] found id: ""
	I0717 19:35:14.309509  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.309520  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:14.309529  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:14.309617  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:14.346835  459741 cri.go:89] found id: ""
	I0717 19:35:14.346863  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.346872  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:14.346878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:14.346935  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:14.381258  459741 cri.go:89] found id: ""
	I0717 19:35:14.381289  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.381298  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:14.381307  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:14.381324  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:14.436214  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:14.436262  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:14.452446  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:14.452478  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:14.520238  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:14.520265  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:14.520282  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:14.600444  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:14.600502  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:13.694391  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:16.194147  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:18.676158  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.676984  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:18.394226  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.395263  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:17.144586  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:17.157992  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:17.158084  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:17.195200  459741 cri.go:89] found id: ""
	I0717 19:35:17.195228  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.195238  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:17.195245  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:17.195308  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:17.231846  459741 cri.go:89] found id: ""
	I0717 19:35:17.231892  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.231904  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:17.231913  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:17.231974  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:17.268234  459741 cri.go:89] found id: ""
	I0717 19:35:17.268261  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.268269  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:17.268275  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:17.268328  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:17.308536  459741 cri.go:89] found id: ""
	I0717 19:35:17.308565  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.308574  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:17.308581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:17.308655  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:17.344285  459741 cri.go:89] found id: ""
	I0717 19:35:17.344316  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.344325  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:17.344331  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:17.344393  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:17.384384  459741 cri.go:89] found id: ""
	I0717 19:35:17.384416  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.384425  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:17.384431  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:17.384518  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:17.422255  459741 cri.go:89] found id: ""
	I0717 19:35:17.422282  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.422291  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:17.422297  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:17.422349  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:17.459561  459741 cri.go:89] found id: ""
	I0717 19:35:17.459590  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.459599  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:17.459611  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:17.459628  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:17.473472  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:17.473510  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:17.544929  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:17.544962  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:17.544979  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:17.627230  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:17.627275  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:17.680586  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:17.680622  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:20.234582  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:20.248215  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:20.248282  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:20.286124  459741 cri.go:89] found id: ""
	I0717 19:35:20.286159  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.286171  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:20.286180  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:20.286251  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:20.323885  459741 cri.go:89] found id: ""
	I0717 19:35:20.323925  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.323938  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:20.323945  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:20.324013  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:20.363968  459741 cri.go:89] found id: ""
	I0717 19:35:20.364011  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.364025  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:20.364034  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:20.364108  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:20.404100  459741 cri.go:89] found id: ""
	I0717 19:35:20.404127  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.404136  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:20.404142  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:20.404212  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:20.442339  459741 cri.go:89] found id: ""
	I0717 19:35:20.442372  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.442383  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:20.442391  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:20.442462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:20.480461  459741 cri.go:89] found id: ""
	I0717 19:35:20.480505  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.480517  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:20.480526  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:20.480618  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:20.516072  459741 cri.go:89] found id: ""
	I0717 19:35:20.516104  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.516114  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:20.516119  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:20.516171  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:20.552294  459741 cri.go:89] found id: ""
	I0717 19:35:20.552333  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.552345  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:20.552359  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:20.552377  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:20.607025  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:20.607067  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:20.624323  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:20.624363  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:20.716528  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:20.716550  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:20.716567  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:20.797015  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:20.797059  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:18.693667  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.694367  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:23.175240  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:25.175374  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:22.893704  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:24.893940  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:23.345063  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:23.358664  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:23.358781  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:23.395399  459741 cri.go:89] found id: ""
	I0717 19:35:23.395429  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.395436  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:23.395441  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:23.395498  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:23.434827  459741 cri.go:89] found id: ""
	I0717 19:35:23.434866  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.434880  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:23.434889  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:23.434960  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:23.470884  459741 cri.go:89] found id: ""
	I0717 19:35:23.470915  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.470931  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:23.470937  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:23.470989  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:23.508532  459741 cri.go:89] found id: ""
	I0717 19:35:23.508566  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.508575  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:23.508581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:23.508636  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:23.543803  459741 cri.go:89] found id: ""
	I0717 19:35:23.543840  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.543856  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:23.543865  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:23.543938  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:23.578897  459741 cri.go:89] found id: ""
	I0717 19:35:23.578942  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.578953  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:23.578962  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:23.579028  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:23.617967  459741 cri.go:89] found id: ""
	I0717 19:35:23.618003  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.618013  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:23.618021  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:23.618092  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:23.660780  459741 cri.go:89] found id: ""
	I0717 19:35:23.660818  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.660830  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:23.660845  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:23.660862  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:23.745248  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:23.745305  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:23.784355  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:23.784392  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:23.838152  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:23.838199  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:23.853017  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:23.853046  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:23.932674  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:26.433476  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:26.457953  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:26.458030  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:23.192304  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:25.193087  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:27.176102  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:29.677887  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:26.895714  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:29.398017  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:26.515559  459741 cri.go:89] found id: ""
	I0717 19:35:26.515589  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.515598  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:26.515605  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:26.515668  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:26.555092  459741 cri.go:89] found id: ""
	I0717 19:35:26.555123  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.555134  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:26.555142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:26.555208  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:26.591291  459741 cri.go:89] found id: ""
	I0717 19:35:26.591335  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.591348  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:26.591357  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:26.591429  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:26.628941  459741 cri.go:89] found id: ""
	I0717 19:35:26.628970  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.628978  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:26.628985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:26.629050  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:26.668355  459741 cri.go:89] found id: ""
	I0717 19:35:26.668386  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.668394  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:26.668399  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:26.668457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:26.711810  459741 cri.go:89] found id: ""
	I0717 19:35:26.711846  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.711857  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:26.711865  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:26.711937  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:26.751674  459741 cri.go:89] found id: ""
	I0717 19:35:26.751708  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.751719  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:26.751726  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:26.751781  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:26.792690  459741 cri.go:89] found id: ""
	I0717 19:35:26.792784  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.792803  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:26.792816  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:26.792847  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:26.846466  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:26.846503  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:26.861467  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:26.861500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:26.934219  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:26.934244  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:26.934260  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:27.017150  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:27.017197  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:29.569360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:29.584040  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:29.584112  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:29.619704  459741 cri.go:89] found id: ""
	I0717 19:35:29.619738  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.619750  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:29.619756  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:29.619824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:29.655983  459741 cri.go:89] found id: ""
	I0717 19:35:29.656018  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.656030  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:29.656037  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:29.656103  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:29.694056  459741 cri.go:89] found id: ""
	I0717 19:35:29.694088  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.694098  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:29.694107  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:29.694165  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:29.731955  459741 cri.go:89] found id: ""
	I0717 19:35:29.732047  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.732066  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:29.732075  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:29.732142  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:29.765921  459741 cri.go:89] found id: ""
	I0717 19:35:29.765952  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.765961  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:29.765967  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:29.766022  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:29.798699  459741 cri.go:89] found id: ""
	I0717 19:35:29.798728  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.798736  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:29.798742  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:29.798804  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:29.832551  459741 cri.go:89] found id: ""
	I0717 19:35:29.832580  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.832587  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:29.832593  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:29.832652  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:29.867985  459741 cri.go:89] found id: ""
	I0717 19:35:29.868022  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.868033  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:29.868046  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:29.868071  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:29.941724  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:29.941746  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:29.941760  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:30.025462  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:30.025506  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:30.066732  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:30.066768  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:30.117389  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:30.117434  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:27.694070  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:30.193593  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.194062  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.175354  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:34.675049  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:31.894626  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:33.897661  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.394620  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.632779  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:32.648751  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:32.648828  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:32.686145  459741 cri.go:89] found id: ""
	I0717 19:35:32.686174  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.686182  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:32.686190  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:32.686242  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:32.721924  459741 cri.go:89] found id: ""
	I0717 19:35:32.721956  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.721967  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:32.721974  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:32.722042  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:32.760815  459741 cri.go:89] found id: ""
	I0717 19:35:32.760851  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.760862  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:32.760869  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:32.760939  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:32.797740  459741 cri.go:89] found id: ""
	I0717 19:35:32.797779  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.797792  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:32.797801  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:32.797878  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:32.833914  459741 cri.go:89] found id: ""
	I0717 19:35:32.833947  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.833955  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:32.833962  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:32.834020  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:32.870265  459741 cri.go:89] found id: ""
	I0717 19:35:32.870297  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.870306  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:32.870319  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:32.870388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:32.911340  459741 cri.go:89] found id: ""
	I0717 19:35:32.911380  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.911391  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:32.911402  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:32.911470  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:32.947932  459741 cri.go:89] found id: ""
	I0717 19:35:32.947967  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.947978  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:32.947990  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:32.948008  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:33.016473  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:33.016513  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:33.016527  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:33.096741  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:33.096783  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:33.137686  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:33.137723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:33.194110  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:33.194157  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:35.710074  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:35.723799  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:35.723880  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:35.759473  459741 cri.go:89] found id: ""
	I0717 19:35:35.759515  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.759526  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:35.759535  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:35.759606  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:35.796764  459741 cri.go:89] found id: ""
	I0717 19:35:35.796799  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.796809  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:35.796817  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:35.796892  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:35.831345  459741 cri.go:89] found id: ""
	I0717 19:35:35.831375  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.831386  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:35.831394  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:35.831463  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:35.869885  459741 cri.go:89] found id: ""
	I0717 19:35:35.869920  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.869931  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:35.869939  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:35.870009  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:35.908812  459741 cri.go:89] found id: ""
	I0717 19:35:35.908840  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.908849  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:35.908855  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:35.908909  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:35.946227  459741 cri.go:89] found id: ""
	I0717 19:35:35.946285  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.946297  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:35.946305  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:35.946387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:35.983534  459741 cri.go:89] found id: ""
	I0717 19:35:35.983577  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.983592  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:35.983601  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:35.983670  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:36.019516  459741 cri.go:89] found id: ""
	I0717 19:35:36.019552  459741 logs.go:276] 0 containers: []
	W0717 19:35:36.019564  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:36.019578  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:36.019597  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:36.070887  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:36.070931  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:36.087054  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:36.087092  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:36.163759  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:36.163795  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:36.163809  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:36.249968  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:36.250012  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:34.693272  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.693505  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.675472  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.677852  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:40.679662  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.895397  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:41.394394  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.799616  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:38.813094  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:38.813161  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:38.848696  459741 cri.go:89] found id: ""
	I0717 19:35:38.848731  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.848745  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:38.848754  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:38.848836  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:38.885898  459741 cri.go:89] found id: ""
	I0717 19:35:38.885932  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.885943  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:38.885950  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:38.886016  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:38.925499  459741 cri.go:89] found id: ""
	I0717 19:35:38.925531  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.925543  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:38.925550  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:38.925615  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:38.961176  459741 cri.go:89] found id: ""
	I0717 19:35:38.961209  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.961218  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:38.961225  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:38.961279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:38.998940  459741 cri.go:89] found id: ""
	I0717 19:35:38.998971  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.998980  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:38.998986  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:38.999040  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:39.034934  459741 cri.go:89] found id: ""
	I0717 19:35:39.034966  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.034973  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:39.034980  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:39.035034  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:39.070278  459741 cri.go:89] found id: ""
	I0717 19:35:39.070309  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.070319  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:39.070327  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:39.070413  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:39.106302  459741 cri.go:89] found id: ""
	I0717 19:35:39.106337  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.106348  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:39.106361  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:39.106379  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:39.145656  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:39.145685  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:39.198998  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:39.199042  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:39.215383  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:39.215416  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:39.284244  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:39.284270  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:39.284286  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:38.693865  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:40.694855  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:43.176915  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.676854  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:43.394736  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.395188  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:41.864335  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:41.878557  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:41.878645  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:41.919806  459741 cri.go:89] found id: ""
	I0717 19:35:41.919843  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.919856  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:41.919865  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:41.919938  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:41.956113  459741 cri.go:89] found id: ""
	I0717 19:35:41.956144  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.956154  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:41.956161  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:41.956230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:41.996211  459741 cri.go:89] found id: ""
	I0717 19:35:41.996256  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.996266  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:41.996274  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:41.996341  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:42.030800  459741 cri.go:89] found id: ""
	I0717 19:35:42.030829  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.030840  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:42.030847  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:42.030922  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:42.065307  459741 cri.go:89] found id: ""
	I0717 19:35:42.065347  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.065358  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:42.065368  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:42.065440  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:42.103574  459741 cri.go:89] found id: ""
	I0717 19:35:42.103609  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.103621  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:42.103628  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:42.103693  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:42.141146  459741 cri.go:89] found id: ""
	I0717 19:35:42.141181  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.141320  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:42.141337  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:42.141418  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:42.179958  459741 cri.go:89] found id: ""
	I0717 19:35:42.179986  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.179994  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:42.180004  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:42.180017  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:42.194911  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:42.194947  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:42.267709  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:42.267750  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:42.267772  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:42.347258  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:42.347302  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:42.393595  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:42.393631  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:44.946043  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:44.958994  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:44.959086  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:44.997687  459741 cri.go:89] found id: ""
	I0717 19:35:44.997724  459741 logs.go:276] 0 containers: []
	W0717 19:35:44.997735  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:44.997743  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:44.997814  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:45.038023  459741 cri.go:89] found id: ""
	I0717 19:35:45.038060  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.038070  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:45.038079  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:45.038141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:45.073529  459741 cri.go:89] found id: ""
	I0717 19:35:45.073562  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.073573  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:45.073581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:45.073644  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:45.109831  459741 cri.go:89] found id: ""
	I0717 19:35:45.109863  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.109871  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:45.109878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:45.109933  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:45.147828  459741 cri.go:89] found id: ""
	I0717 19:35:45.147867  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.147891  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:45.147899  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:45.147986  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:45.184729  459741 cri.go:89] found id: ""
	I0717 19:35:45.184765  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.184777  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:45.184784  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:45.184846  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:45.223895  459741 cri.go:89] found id: ""
	I0717 19:35:45.223940  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.223950  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:45.223956  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:45.224016  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:45.263391  459741 cri.go:89] found id: ""
	I0717 19:35:45.263421  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.263430  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:45.263440  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:45.263457  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:45.316323  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:45.316369  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:45.331447  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:45.331491  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:45.413226  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:45.413259  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:45.413277  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:45.498680  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:45.498738  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:43.193210  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.693264  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:48.175929  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:50.176109  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:47.893486  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:49.894666  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:48.043162  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:48.057081  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:48.057146  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:48.096607  459741 cri.go:89] found id: ""
	I0717 19:35:48.096636  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.096644  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:48.096650  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:48.096710  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:48.132865  459741 cri.go:89] found id: ""
	I0717 19:35:48.132895  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.132906  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:48.132913  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:48.132979  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:48.168060  459741 cri.go:89] found id: ""
	I0717 19:35:48.168090  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.168102  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:48.168109  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:48.168177  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:48.203993  459741 cri.go:89] found id: ""
	I0717 19:35:48.204023  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.204033  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:48.204041  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:48.204102  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:48.240321  459741 cri.go:89] found id: ""
	I0717 19:35:48.240353  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.240364  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:48.240371  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:48.240440  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:48.281103  459741 cri.go:89] found id: ""
	I0717 19:35:48.281147  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.281158  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:48.281167  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:48.281233  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:48.316002  459741 cri.go:89] found id: ""
	I0717 19:35:48.316034  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.316043  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:48.316049  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:48.316102  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:48.355370  459741 cri.go:89] found id: ""
	I0717 19:35:48.355399  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.355409  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:48.355421  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:48.355456  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:48.372448  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:48.372496  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:48.443867  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:48.443901  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:48.443919  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:48.519762  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:48.519807  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:48.562263  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:48.562297  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:51.112016  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:51.125350  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:51.125421  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:51.162053  459741 cri.go:89] found id: ""
	I0717 19:35:51.162090  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.162101  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:51.162111  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:51.162182  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:51.201853  459741 cri.go:89] found id: ""
	I0717 19:35:51.201924  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.201937  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:51.201944  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:51.202021  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:51.241675  459741 cri.go:89] found id: ""
	I0717 19:35:51.241709  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.241720  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:51.241729  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:51.241798  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:51.279332  459741 cri.go:89] found id: ""
	I0717 19:35:51.279369  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.279380  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:51.279388  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:51.279443  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:51.316375  459741 cri.go:89] found id: ""
	I0717 19:35:51.316413  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.316424  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:51.316432  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:51.316531  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:51.353300  459741 cri.go:89] found id: ""
	I0717 19:35:51.353337  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.353347  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:51.353355  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:51.353424  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:51.390413  459741 cri.go:89] found id: ""
	I0717 19:35:51.390441  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.390449  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:51.390457  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:51.390523  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:51.428040  459741 cri.go:89] found id: ""
	I0717 19:35:51.428077  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.428089  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:51.428103  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:51.428145  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:51.481743  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:51.481792  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:51.498226  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:51.498261  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:35:48.194645  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:50.194741  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:52.676762  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:55.177549  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:51.895688  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:54.394821  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	W0717 19:35:51.579871  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:51.579895  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:51.579909  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:51.659448  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:51.659490  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:54.201712  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:54.215688  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:54.215766  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:54.253448  459741 cri.go:89] found id: ""
	I0717 19:35:54.253479  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.253487  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:54.253493  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:54.253547  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:54.288135  459741 cri.go:89] found id: ""
	I0717 19:35:54.288176  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.288187  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:54.288194  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:54.288292  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:54.324798  459741 cri.go:89] found id: ""
	I0717 19:35:54.324845  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.324855  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:54.324864  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:54.324936  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:54.363909  459741 cri.go:89] found id: ""
	I0717 19:35:54.363943  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.363955  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:54.363964  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:54.364039  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:54.401221  459741 cri.go:89] found id: ""
	I0717 19:35:54.401248  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.401259  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:54.401267  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:54.401335  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:54.439258  459741 cri.go:89] found id: ""
	I0717 19:35:54.439285  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.439293  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:54.439299  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:54.439352  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:54.473321  459741 cri.go:89] found id: ""
	I0717 19:35:54.473358  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.473373  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:54.473379  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:54.473432  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:54.519107  459741 cri.go:89] found id: ""
	I0717 19:35:54.519141  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.519152  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:54.519167  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:54.519184  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:54.562666  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:54.562710  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:54.614711  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:54.614756  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:54.630953  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:54.630986  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:54.706639  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:54.706666  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:54.706684  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:52.694467  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:55.193366  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:57.179574  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:59.675883  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:56.895166  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:59.396238  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:57.289180  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:57.302364  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:57.302447  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:57.344401  459741 cri.go:89] found id: ""
	I0717 19:35:57.344437  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.344450  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:57.344459  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:57.344551  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:57.384095  459741 cri.go:89] found id: ""
	I0717 19:35:57.384126  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.384135  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:57.384142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:57.384209  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:57.422789  459741 cri.go:89] found id: ""
	I0717 19:35:57.422825  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.422836  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:57.422844  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:57.422914  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:57.460943  459741 cri.go:89] found id: ""
	I0717 19:35:57.460970  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.460979  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:57.460984  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:57.461035  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:57.495168  459741 cri.go:89] found id: ""
	I0717 19:35:57.495197  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.495204  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:57.495211  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:57.495267  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:57.529611  459741 cri.go:89] found id: ""
	I0717 19:35:57.529641  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.529649  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:57.529656  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:57.529719  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:57.565502  459741 cri.go:89] found id: ""
	I0717 19:35:57.565535  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.565544  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:57.565549  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:57.565610  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:57.601058  459741 cri.go:89] found id: ""
	I0717 19:35:57.601093  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.601107  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:57.601121  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:57.601139  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:57.651408  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:57.651450  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:57.665696  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:57.665734  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:57.739259  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:57.739301  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:57.739335  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:57.818085  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:57.818128  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:00.358441  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:00.371840  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:00.371904  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:00.411607  459741 cri.go:89] found id: ""
	I0717 19:36:00.411639  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.411647  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:00.411653  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:00.411717  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:00.448879  459741 cri.go:89] found id: ""
	I0717 19:36:00.448917  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.448929  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:00.448938  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:00.449006  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:00.489637  459741 cri.go:89] found id: ""
	I0717 19:36:00.489683  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.489695  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:00.489705  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:00.489773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:00.528172  459741 cri.go:89] found id: ""
	I0717 19:36:00.528206  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.528215  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:00.528221  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:00.528284  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:00.564857  459741 cri.go:89] found id: ""
	I0717 19:36:00.564891  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.564903  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:00.564911  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:00.564979  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:00.601226  459741 cri.go:89] found id: ""
	I0717 19:36:00.601257  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.601269  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:00.601277  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:00.601342  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:00.641481  459741 cri.go:89] found id: ""
	I0717 19:36:00.641515  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.641526  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:00.641533  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:00.641609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:00.678564  459741 cri.go:89] found id: ""
	I0717 19:36:00.678590  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.678598  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:00.678608  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:00.678622  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:00.763613  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:00.763657  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:00.804763  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:00.804797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:00.856648  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:00.856686  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:00.870767  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:00.870797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:00.949952  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:57.694827  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:00.193607  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:02.194404  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:01.676020  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:03.676246  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:05.676400  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:01.894566  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:04.394473  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:06.395396  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:03.450461  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:03.465429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:03.465500  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:03.504346  459741 cri.go:89] found id: ""
	I0717 19:36:03.504377  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.504387  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:03.504393  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:03.504457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:03.546643  459741 cri.go:89] found id: ""
	I0717 19:36:03.546671  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.546678  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:03.546685  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:03.546741  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:03.587389  459741 cri.go:89] found id: ""
	I0717 19:36:03.587423  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.587435  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:03.587443  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:03.587506  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:03.621968  459741 cri.go:89] found id: ""
	I0717 19:36:03.622002  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.622014  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:03.622023  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:03.622095  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:03.655934  459741 cri.go:89] found id: ""
	I0717 19:36:03.655967  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.655976  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:03.655982  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:03.656051  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:03.690464  459741 cri.go:89] found id: ""
	I0717 19:36:03.690493  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.690503  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:03.690511  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:03.690575  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:03.727030  459741 cri.go:89] found id: ""
	I0717 19:36:03.727068  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.727080  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:03.727088  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:03.727158  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:03.760858  459741 cri.go:89] found id: ""
	I0717 19:36:03.760898  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.760907  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:03.760917  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:03.760931  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:03.774333  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:03.774366  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:03.849228  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:03.849255  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:03.849273  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:03.930165  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:03.930203  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:03.971833  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:03.971875  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:04.693899  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:07.192840  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:07.678006  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:10.176147  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:08.395699  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:10.894333  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:06.525723  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:06.539410  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:06.539502  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:06.580112  459741 cri.go:89] found id: ""
	I0717 19:36:06.580152  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.580173  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:06.580181  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:06.580272  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:06.622098  459741 cri.go:89] found id: ""
	I0717 19:36:06.622128  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.622136  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:06.622142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:06.622209  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:06.669930  459741 cri.go:89] found id: ""
	I0717 19:36:06.669962  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.669973  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:06.669982  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:06.670048  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:06.717072  459741 cri.go:89] found id: ""
	I0717 19:36:06.717111  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.717124  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:06.717132  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:06.717207  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:06.756637  459741 cri.go:89] found id: ""
	I0717 19:36:06.756672  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.756680  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:06.756694  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:06.756756  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:06.804359  459741 cri.go:89] found id: ""
	I0717 19:36:06.804388  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.804397  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:06.804404  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:06.804468  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:06.856082  459741 cri.go:89] found id: ""
	I0717 19:36:06.856111  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.856120  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:06.856125  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:06.856180  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:06.898141  459741 cri.go:89] found id: ""
	I0717 19:36:06.898170  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.898180  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:06.898191  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:06.898209  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:06.975635  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:06.975660  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:06.975676  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:07.055695  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:07.055741  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:07.096041  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:07.096077  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:07.146523  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:07.146570  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:09.661906  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:09.676994  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:09.677078  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:09.716287  459741 cri.go:89] found id: ""
	I0717 19:36:09.716315  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.716328  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:09.716337  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:09.716405  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:09.759489  459741 cri.go:89] found id: ""
	I0717 19:36:09.759521  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.759532  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:09.759541  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:09.759601  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:09.799604  459741 cri.go:89] found id: ""
	I0717 19:36:09.799634  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.799643  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:09.799649  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:09.799709  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:09.839542  459741 cri.go:89] found id: ""
	I0717 19:36:09.839572  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.839581  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:09.839588  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:09.839666  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:09.879061  459741 cri.go:89] found id: ""
	I0717 19:36:09.879098  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.879110  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:09.879118  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:09.879184  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:09.920903  459741 cri.go:89] found id: ""
	I0717 19:36:09.920931  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.920939  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:09.920946  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:09.921002  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:09.956362  459741 cri.go:89] found id: ""
	I0717 19:36:09.956391  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.956411  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:09.956429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:09.956508  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:09.992817  459741 cri.go:89] found id: ""
	I0717 19:36:09.992849  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.992859  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:09.992872  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:09.992889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:10.060594  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:10.060620  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:10.060660  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:10.141840  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:10.141895  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:10.182850  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:10.182889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:10.238946  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:10.238993  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:09.194101  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:11.693468  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.675987  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:15.176665  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.894710  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:15.394738  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.753796  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:12.766740  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:12.766816  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:12.799307  459741 cri.go:89] found id: ""
	I0717 19:36:12.799341  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.799351  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:12.799362  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:12.799439  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:12.838345  459741 cri.go:89] found id: ""
	I0717 19:36:12.838395  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.838408  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:12.838416  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:12.838482  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:12.876780  459741 cri.go:89] found id: ""
	I0717 19:36:12.876807  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.876816  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:12.876822  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:12.876907  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:12.913222  459741 cri.go:89] found id: ""
	I0717 19:36:12.913253  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.913263  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:12.913271  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:12.913334  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:12.948210  459741 cri.go:89] found id: ""
	I0717 19:36:12.948245  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.948255  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:12.948263  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:12.948328  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:12.980746  459741 cri.go:89] found id: ""
	I0717 19:36:12.980782  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.980794  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:12.980806  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:12.980871  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:13.015655  459741 cri.go:89] found id: ""
	I0717 19:36:13.015694  459741 logs.go:276] 0 containers: []
	W0717 19:36:13.015707  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:13.015715  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:13.015773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:13.050570  459741 cri.go:89] found id: ""
	I0717 19:36:13.050609  459741 logs.go:276] 0 containers: []
	W0717 19:36:13.050617  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:13.050627  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:13.050642  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:13.101031  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:13.101072  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:13.115206  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:13.115239  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:13.190949  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:13.190979  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:13.190994  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:13.267467  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:13.267508  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:15.808237  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:15.822498  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:15.822570  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:15.860509  459741 cri.go:89] found id: ""
	I0717 19:36:15.860545  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.860556  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:15.860564  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:15.860630  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:15.895608  459741 cri.go:89] found id: ""
	I0717 19:36:15.895655  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.895666  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:15.895674  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:15.895738  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:15.936113  459741 cri.go:89] found id: ""
	I0717 19:36:15.936148  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.936159  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:15.936168  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:15.936254  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:15.973146  459741 cri.go:89] found id: ""
	I0717 19:36:15.973186  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.973198  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:15.973207  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:15.973273  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:16.006122  459741 cri.go:89] found id: ""
	I0717 19:36:16.006164  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.006175  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:16.006183  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:16.006255  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:16.044352  459741 cri.go:89] found id: ""
	I0717 19:36:16.044385  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.044397  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:16.044406  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:16.044476  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:16.081573  459741 cri.go:89] found id: ""
	I0717 19:36:16.081614  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.081625  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:16.081637  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:16.081707  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:16.120444  459741 cri.go:89] found id: ""
	I0717 19:36:16.120480  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.120506  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:16.120520  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:16.120536  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:16.171563  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:16.171601  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:16.185534  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:16.185564  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:16.258627  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:16.258657  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:16.258672  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:16.341345  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:16.341390  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:14.193370  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:16.693933  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:17.680240  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:19.681457  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:17.894353  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:19.894879  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:18.883092  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:18.897931  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:18.898015  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:18.932054  459741 cri.go:89] found id: ""
	I0717 19:36:18.932085  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.932096  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:18.932104  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:18.932162  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:18.966450  459741 cri.go:89] found id: ""
	I0717 19:36:18.966478  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.966490  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:18.966498  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:18.966561  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:18.999881  459741 cri.go:89] found id: ""
	I0717 19:36:18.999909  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.999920  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:18.999927  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:18.999984  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:19.036701  459741 cri.go:89] found id: ""
	I0717 19:36:19.036730  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.036746  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:19.036753  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:19.036824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:19.073488  459741 cri.go:89] found id: ""
	I0717 19:36:19.073515  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.073523  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:19.073528  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:19.073582  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:19.109128  459741 cri.go:89] found id: ""
	I0717 19:36:19.109161  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.109171  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:19.109179  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:19.109249  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:19.148452  459741 cri.go:89] found id: ""
	I0717 19:36:19.148494  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.148509  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:19.148518  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:19.148595  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:19.184056  459741 cri.go:89] found id: ""
	I0717 19:36:19.184086  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.184097  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:19.184112  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:19.184129  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:19.198518  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:19.198553  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:19.273176  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:19.273198  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:19.273213  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:19.347999  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:19.348042  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:19.390847  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:19.390890  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:19.194436  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:21.693020  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:22.176414  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:24.676290  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:22.395588  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:24.894771  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:21.946700  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:21.960590  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:21.960655  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:21.994632  459741 cri.go:89] found id: ""
	I0717 19:36:21.994662  459741 logs.go:276] 0 containers: []
	W0717 19:36:21.994670  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:21.994677  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:21.994738  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:22.029390  459741 cri.go:89] found id: ""
	I0717 19:36:22.029419  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.029428  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:22.029434  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:22.029484  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:22.065632  459741 cri.go:89] found id: ""
	I0717 19:36:22.065668  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.065679  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:22.065687  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:22.065792  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:22.100893  459741 cri.go:89] found id: ""
	I0717 19:36:22.100931  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.100942  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:22.100950  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:22.101007  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:22.137064  459741 cri.go:89] found id: ""
	I0717 19:36:22.137099  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.137110  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:22.137118  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:22.137187  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:22.176027  459741 cri.go:89] found id: ""
	I0717 19:36:22.176061  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.176071  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:22.176080  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:22.176147  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:22.211035  459741 cri.go:89] found id: ""
	I0717 19:36:22.211060  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.211068  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:22.211076  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:22.211129  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:22.246541  459741 cri.go:89] found id: ""
	I0717 19:36:22.246577  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.246589  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:22.246617  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:22.246635  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:22.288154  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:22.288198  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:22.342243  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:22.342295  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:22.356125  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:22.356157  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:22.427767  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:22.427793  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:22.427806  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:25.011986  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:25.026057  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:25.026134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:25.060744  459741 cri.go:89] found id: ""
	I0717 19:36:25.060778  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.060788  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:25.060794  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:25.060857  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:25.094760  459741 cri.go:89] found id: ""
	I0717 19:36:25.094799  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.094810  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:25.094818  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:25.094884  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:25.129937  459741 cri.go:89] found id: ""
	I0717 19:36:25.129980  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.129990  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:25.129996  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:25.130053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:25.162886  459741 cri.go:89] found id: ""
	I0717 19:36:25.162914  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.162922  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:25.162927  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:25.162994  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:25.199261  459741 cri.go:89] found id: ""
	I0717 19:36:25.199290  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.199312  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:25.199329  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:25.199388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:25.236454  459741 cri.go:89] found id: ""
	I0717 19:36:25.236494  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.236506  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:25.236514  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:25.236569  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:25.272257  459741 cri.go:89] found id: ""
	I0717 19:36:25.272293  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.272304  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:25.272312  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:25.272381  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:25.308442  459741 cri.go:89] found id: ""
	I0717 19:36:25.308478  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.308504  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:25.308517  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:25.308534  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:25.362269  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:25.362321  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:25.376994  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:25.377026  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:25.450219  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:25.450242  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:25.450256  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:25.537123  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:25.537161  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:23.693457  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.192763  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.677228  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:29.175390  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.176353  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.895481  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:29.393635  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.395374  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:28.077415  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:28.093047  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:28.093126  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:28.128129  459741 cri.go:89] found id: ""
	I0717 19:36:28.128158  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.128166  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:28.128180  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:28.128234  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:28.170796  459741 cri.go:89] found id: ""
	I0717 19:36:28.170834  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.170845  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:28.170853  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:28.170924  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:28.208250  459741 cri.go:89] found id: ""
	I0717 19:36:28.208278  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.208287  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:28.208304  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:28.208385  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:28.251511  459741 cri.go:89] found id: ""
	I0717 19:36:28.251547  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.251567  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:28.251575  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:28.251648  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:28.286597  459741 cri.go:89] found id: ""
	I0717 19:36:28.286633  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.286643  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:28.286651  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:28.286715  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:28.323089  459741 cri.go:89] found id: ""
	I0717 19:36:28.323119  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.323127  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:28.323133  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:28.323192  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:28.357941  459741 cri.go:89] found id: ""
	I0717 19:36:28.357972  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.357980  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:28.357987  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:28.358053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:28.393141  459741 cri.go:89] found id: ""
	I0717 19:36:28.393171  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.393182  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:28.393192  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:28.393208  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:28.446992  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:28.447031  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:28.460386  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:28.460416  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:28.524640  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:28.524671  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:28.524694  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:28.605322  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:28.605363  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:31.145909  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:31.159567  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:31.159686  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:31.196086  459741 cri.go:89] found id: ""
	I0717 19:36:31.196113  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.196125  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:31.196134  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:31.196186  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:31.238076  459741 cri.go:89] found id: ""
	I0717 19:36:31.238104  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.238111  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:31.238117  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:31.238172  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:31.274360  459741 cri.go:89] found id: ""
	I0717 19:36:31.274391  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.274400  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:31.274406  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:31.274462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:31.308845  459741 cri.go:89] found id: ""
	I0717 19:36:31.308871  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.308880  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:31.308886  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:31.308946  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:31.344978  459741 cri.go:89] found id: ""
	I0717 19:36:31.345010  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.345021  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:31.345028  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:31.345094  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:31.381741  459741 cri.go:89] found id: ""
	I0717 19:36:31.381767  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.381775  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:31.381783  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:31.381837  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:31.417522  459741 cri.go:89] found id: ""
	I0717 19:36:31.417554  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.417563  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:31.417571  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:31.417635  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:31.451121  459741 cri.go:89] found id: ""
	I0717 19:36:31.451152  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.451165  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:31.451177  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:31.451195  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:28.195048  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:30.693260  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:33.676171  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:35.676215  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:33.894329  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:36.394573  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.542015  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:31.542063  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:31.583418  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:31.583449  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:31.635807  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:31.635845  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:31.649144  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:31.649172  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:31.728539  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:34.229124  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:34.242482  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:34.242554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:34.276554  459741 cri.go:89] found id: ""
	I0717 19:36:34.276602  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.276610  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:34.276616  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:34.276671  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:34.314766  459741 cri.go:89] found id: ""
	I0717 19:36:34.314799  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.314807  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:34.314813  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:34.314874  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:34.352765  459741 cri.go:89] found id: ""
	I0717 19:36:34.352798  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.352809  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:34.352817  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:34.352886  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:34.386519  459741 cri.go:89] found id: ""
	I0717 19:36:34.386556  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.386564  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:34.386570  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:34.386669  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:34.423789  459741 cri.go:89] found id: ""
	I0717 19:36:34.423820  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.423829  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:34.423838  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:34.423911  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:34.458849  459741 cri.go:89] found id: ""
	I0717 19:36:34.458883  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.458895  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:34.458903  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:34.458969  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:34.494653  459741 cri.go:89] found id: ""
	I0717 19:36:34.494686  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.494697  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:34.494705  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:34.494770  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:34.529386  459741 cri.go:89] found id: ""
	I0717 19:36:34.529423  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.529431  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:34.529441  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:34.529455  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:34.582161  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:34.582204  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:34.596699  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:34.596732  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:34.673468  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:34.673501  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:34.673519  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:34.751134  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:34.751180  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:33.193313  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:35.193610  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:38.178018  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.676860  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:38.395038  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.396311  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:37.290429  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:37.304307  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:37.304391  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:37.338790  459741 cri.go:89] found id: ""
	I0717 19:36:37.338818  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.338827  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:37.338833  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:37.338903  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:37.376923  459741 cri.go:89] found id: ""
	I0717 19:36:37.376953  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.376961  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:37.376966  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:37.377017  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:37.415988  459741 cri.go:89] found id: ""
	I0717 19:36:37.416016  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.416024  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:37.416029  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:37.416083  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:37.449398  459741 cri.go:89] found id: ""
	I0717 19:36:37.449435  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.449447  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:37.449459  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:37.449532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:37.489489  459741 cri.go:89] found id: ""
	I0717 19:36:37.489525  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.489535  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:37.489544  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:37.489609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:37.528055  459741 cri.go:89] found id: ""
	I0717 19:36:37.528092  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.528103  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:37.528112  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:37.528174  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:37.564295  459741 cri.go:89] found id: ""
	I0717 19:36:37.564332  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.564344  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:37.564352  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:37.564421  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:37.597909  459741 cri.go:89] found id: ""
	I0717 19:36:37.597949  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.597960  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:37.597976  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:37.598002  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:37.652104  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:37.652147  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:37.668341  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:37.668374  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:37.746663  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:37.746693  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:37.746706  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:37.822210  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:37.822250  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:40.370417  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:40.385795  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:40.385873  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:40.431821  459741 cri.go:89] found id: ""
	I0717 19:36:40.431861  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.431873  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:40.431881  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:40.431952  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:40.468302  459741 cri.go:89] found id: ""
	I0717 19:36:40.468334  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.468346  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:40.468354  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:40.468409  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:40.503678  459741 cri.go:89] found id: ""
	I0717 19:36:40.503709  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.503727  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:40.503733  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:40.503785  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:40.540732  459741 cri.go:89] found id: ""
	I0717 19:36:40.540763  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.540772  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:40.540778  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:40.540843  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:40.589546  459741 cri.go:89] found id: ""
	I0717 19:36:40.589574  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.589583  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:40.589590  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:40.589642  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:40.625314  459741 cri.go:89] found id: ""
	I0717 19:36:40.625350  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.625359  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:40.625368  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:40.625435  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:40.663946  459741 cri.go:89] found id: ""
	I0717 19:36:40.663974  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.663982  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:40.663990  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:40.664048  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:40.701681  459741 cri.go:89] found id: ""
	I0717 19:36:40.701712  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.701722  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:40.701732  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:40.701747  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:40.762876  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:40.762913  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:40.777993  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:40.778039  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:40.854973  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:40.854996  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:40.855015  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:40.935075  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:40.935114  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:37.693613  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.192783  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:42.193024  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:43.176326  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:45.675745  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:42.895180  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:45.396439  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:43.476048  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:43.490580  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:43.490652  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:43.525613  459741 cri.go:89] found id: ""
	I0717 19:36:43.525649  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.525658  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:43.525665  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:43.525722  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:43.564102  459741 cri.go:89] found id: ""
	I0717 19:36:43.564147  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.564158  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:43.564166  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:43.564230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:43.603290  459741 cri.go:89] found id: ""
	I0717 19:36:43.603316  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.603323  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:43.603329  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:43.603387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:43.638001  459741 cri.go:89] found id: ""
	I0717 19:36:43.638031  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.638038  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:43.638056  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:43.638134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:43.672992  459741 cri.go:89] found id: ""
	I0717 19:36:43.673026  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.673037  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:43.673045  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:43.673115  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:43.713130  459741 cri.go:89] found id: ""
	I0717 19:36:43.713165  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.713176  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:43.713188  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:43.713255  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:43.747637  459741 cri.go:89] found id: ""
	I0717 19:36:43.747685  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.747694  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:43.747702  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:43.747771  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:43.784425  459741 cri.go:89] found id: ""
	I0717 19:36:43.784460  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.784471  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:43.784492  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:43.784510  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:43.798454  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:43.798483  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:43.875753  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:43.875776  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:43.875793  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:43.957009  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:43.957052  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:44.001089  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:44.001122  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:44.193299  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:46.193520  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:47.679212  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:50.176924  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:47.894374  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:49.898348  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:46.554298  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:46.568658  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:46.568730  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:46.604721  459741 cri.go:89] found id: ""
	I0717 19:36:46.604750  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.604759  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:46.604765  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:46.604815  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:46.644164  459741 cri.go:89] found id: ""
	I0717 19:36:46.644196  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.644209  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:46.644217  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:46.644288  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:46.683657  459741 cri.go:89] found id: ""
	I0717 19:36:46.683695  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.683702  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:46.683708  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:46.683773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:46.720967  459741 cri.go:89] found id: ""
	I0717 19:36:46.720995  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.721003  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:46.721008  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:46.721059  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:46.755825  459741 cri.go:89] found id: ""
	I0717 19:36:46.755854  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.755866  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:46.755876  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:46.755946  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:46.797091  459741 cri.go:89] found id: ""
	I0717 19:36:46.797130  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.797138  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:46.797145  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:46.797201  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:46.838053  459741 cri.go:89] found id: ""
	I0717 19:36:46.838090  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.838100  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:46.838108  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:46.838176  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:46.881516  459741 cri.go:89] found id: ""
	I0717 19:36:46.881549  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.881558  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:46.881567  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:46.881582  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:46.952407  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:46.952434  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:46.952457  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:47.043739  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:47.043787  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:47.083335  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:47.083367  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:47.138212  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:47.138256  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:49.656394  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:49.670755  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:49.670830  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:49.709177  459741 cri.go:89] found id: ""
	I0717 19:36:49.709208  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.709217  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:49.709222  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:49.709286  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:49.745905  459741 cri.go:89] found id: ""
	I0717 19:36:49.745940  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.745952  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:49.745960  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:49.746038  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:49.779073  459741 cri.go:89] found id: ""
	I0717 19:36:49.779106  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.779117  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:49.779124  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:49.779190  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:49.815459  459741 cri.go:89] found id: ""
	I0717 19:36:49.815504  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.815516  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:49.815525  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:49.815635  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:49.854714  459741 cri.go:89] found id: ""
	I0717 19:36:49.854751  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.854760  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:49.854766  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:49.854821  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:49.897717  459741 cri.go:89] found id: ""
	I0717 19:36:49.897742  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.897752  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:49.897760  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:49.897824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:49.933388  459741 cri.go:89] found id: ""
	I0717 19:36:49.933419  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.933429  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:49.933437  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:49.933527  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:49.971955  459741 cri.go:89] found id: ""
	I0717 19:36:49.971988  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.971999  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:49.972011  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:49.972029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:50.025761  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:50.025801  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:50.039771  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:50.039801  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:50.111349  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:50.111374  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:50.111388  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:50.193972  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:50.194004  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:48.693842  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:51.192837  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.177150  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:54.675862  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.394841  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:54.395035  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:56.395227  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.733468  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:52.749052  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:52.749119  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:52.785364  459741 cri.go:89] found id: ""
	I0717 19:36:52.785392  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.785400  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:52.785407  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:52.785462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:52.824177  459741 cri.go:89] found id: ""
	I0717 19:36:52.824211  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.824219  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:52.824225  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:52.824298  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:52.860781  459741 cri.go:89] found id: ""
	I0717 19:36:52.860812  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.860823  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:52.860831  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:52.860904  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:52.903963  459741 cri.go:89] found id: ""
	I0717 19:36:52.903995  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.904006  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:52.904014  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:52.904080  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:52.944920  459741 cri.go:89] found id: ""
	I0717 19:36:52.944950  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.944961  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:52.944968  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:52.945033  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:53.007409  459741 cri.go:89] found id: ""
	I0717 19:36:53.007438  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.007449  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:53.007456  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:53.007526  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:53.048160  459741 cri.go:89] found id: ""
	I0717 19:36:53.048193  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.048205  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:53.048213  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:53.048285  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:53.083493  459741 cri.go:89] found id: ""
	I0717 19:36:53.083522  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.083534  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:53.083546  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:53.083563  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:53.139380  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:53.139425  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:53.154005  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:53.154107  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:53.230123  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:53.230146  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:53.230160  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:53.307183  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:53.307228  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:55.849344  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:55.863554  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:55.863625  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:55.899317  459741 cri.go:89] found id: ""
	I0717 19:36:55.899347  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.899358  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:55.899365  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:55.899433  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:55.934725  459741 cri.go:89] found id: ""
	I0717 19:36:55.934760  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.934771  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:55.934779  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:55.934854  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:55.967721  459741 cri.go:89] found id: ""
	I0717 19:36:55.967751  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.967760  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:55.967768  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:55.967835  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:56.001163  459741 cri.go:89] found id: ""
	I0717 19:36:56.001193  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.001203  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:56.001211  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:56.001309  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:56.040863  459741 cri.go:89] found id: ""
	I0717 19:36:56.040898  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.040910  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:56.040918  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:56.040990  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:56.075045  459741 cri.go:89] found id: ""
	I0717 19:36:56.075075  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.075083  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:56.075090  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:56.075141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:56.115641  459741 cri.go:89] found id: ""
	I0717 19:36:56.115673  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.115683  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:56.115692  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:56.115757  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:56.154952  459741 cri.go:89] found id: ""
	I0717 19:36:56.154989  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.155000  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:56.155012  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:56.155029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:56.168624  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:56.168655  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:56.241129  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:56.241149  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:56.241161  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:56.326577  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:56.326627  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:56.370835  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:56.370896  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:53.194230  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:55.693021  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:56.677604  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:59.177845  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:58.395814  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:00.894894  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:58.923483  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:58.936869  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:58.936971  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:58.970975  459741 cri.go:89] found id: ""
	I0717 19:36:58.971015  459741 logs.go:276] 0 containers: []
	W0717 19:36:58.971026  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:58.971036  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:58.971103  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:59.004902  459741 cri.go:89] found id: ""
	I0717 19:36:59.004936  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.004945  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:59.004953  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:59.005021  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:59.049595  459741 cri.go:89] found id: ""
	I0717 19:36:59.049627  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.049635  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:59.049642  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:59.049694  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:59.084143  459741 cri.go:89] found id: ""
	I0717 19:36:59.084175  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.084185  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:59.084192  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:59.084244  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:59.121362  459741 cri.go:89] found id: ""
	I0717 19:36:59.121397  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.121408  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:59.121416  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:59.121486  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:59.158791  459741 cri.go:89] found id: ""
	I0717 19:36:59.158823  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.158832  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:59.158839  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:59.158907  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:59.196785  459741 cri.go:89] found id: ""
	I0717 19:36:59.196814  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.196825  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:59.196832  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:59.196928  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:59.233526  459741 cri.go:89] found id: ""
	I0717 19:36:59.233585  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.233602  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:59.233615  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:59.233633  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:59.287586  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:59.287629  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:59.303060  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:59.303109  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:59.380105  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:59.380141  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:59.380160  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:59.457673  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:59.457723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:57.693064  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:59.696137  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:02.194529  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:01.676676  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:04.174546  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.176591  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:02.895007  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:04.896128  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:01.999397  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:02.013638  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:02.013769  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:02.053831  459741 cri.go:89] found id: ""
	I0717 19:37:02.053860  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.053869  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:02.053875  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:02.053929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:02.095600  459741 cri.go:89] found id: ""
	I0717 19:37:02.095634  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.095644  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:02.095650  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:02.095703  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:02.134219  459741 cri.go:89] found id: ""
	I0717 19:37:02.134253  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.134267  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:02.134277  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:02.134351  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:02.172985  459741 cri.go:89] found id: ""
	I0717 19:37:02.173017  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.173029  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:02.173037  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:02.173109  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:02.210465  459741 cri.go:89] found id: ""
	I0717 19:37:02.210492  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.210500  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:02.210506  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:02.210562  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:02.246736  459741 cri.go:89] found id: ""
	I0717 19:37:02.246767  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.246775  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:02.246781  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:02.246834  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:02.285131  459741 cri.go:89] found id: ""
	I0717 19:37:02.285166  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.285177  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:02.285185  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:02.285254  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:02.323199  459741 cri.go:89] found id: ""
	I0717 19:37:02.323232  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.323241  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:02.323252  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:02.323266  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:02.337356  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:02.337392  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:02.411669  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:02.411706  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:02.411724  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:02.488543  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:02.488590  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:02.531147  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:02.531189  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:05.085888  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:05.099059  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:05.099134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:05.140745  459741 cri.go:89] found id: ""
	I0717 19:37:05.140771  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.140783  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:05.140791  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:05.140859  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:05.175634  459741 cri.go:89] found id: ""
	I0717 19:37:05.175669  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.175679  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:05.175687  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:05.175761  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:05.213114  459741 cri.go:89] found id: ""
	I0717 19:37:05.213148  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.213157  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:05.213171  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:05.213242  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:05.249756  459741 cri.go:89] found id: ""
	I0717 19:37:05.249791  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.249803  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:05.249811  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:05.249882  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:05.285601  459741 cri.go:89] found id: ""
	I0717 19:37:05.285634  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.285645  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:05.285654  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:05.285729  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:05.325523  459741 cri.go:89] found id: ""
	I0717 19:37:05.325557  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.325566  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:05.325573  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:05.325641  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:05.364250  459741 cri.go:89] found id: ""
	I0717 19:37:05.364284  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.364295  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:05.364303  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:05.364377  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:05.399924  459741 cri.go:89] found id: ""
	I0717 19:37:05.399951  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.399958  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:05.399967  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:05.399979  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:05.456770  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:05.456821  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:05.472041  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:05.472073  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:05.539653  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:05.539685  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:05.539703  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:05.628977  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:05.629023  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:04.693176  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.693594  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:08.677525  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.175472  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.897414  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:09.394322  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.395513  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:08.181585  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:08.195153  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:08.195225  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:08.234624  459741 cri.go:89] found id: ""
	I0717 19:37:08.234662  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.234674  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:08.234682  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:08.234739  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:08.273034  459741 cri.go:89] found id: ""
	I0717 19:37:08.273069  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.273081  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:08.273089  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:08.273157  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:08.310695  459741 cri.go:89] found id: ""
	I0717 19:37:08.310728  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.310740  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:08.310749  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:08.310815  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:08.346891  459741 cri.go:89] found id: ""
	I0717 19:37:08.346925  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.346936  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:08.346944  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:08.347015  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:08.384830  459741 cri.go:89] found id: ""
	I0717 19:37:08.384863  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.384872  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:08.384878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:08.384948  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:08.423939  459741 cri.go:89] found id: ""
	I0717 19:37:08.423973  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.423983  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:08.423991  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:08.424046  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:08.460822  459741 cri.go:89] found id: ""
	I0717 19:37:08.460854  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.460863  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:08.460874  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:08.460929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:08.497122  459741 cri.go:89] found id: ""
	I0717 19:37:08.497152  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.497164  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:08.497182  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:08.497197  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:08.549130  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:08.549179  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:08.566072  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:08.566109  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:08.637602  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:08.637629  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:08.637647  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:08.729025  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:08.729078  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:11.270696  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:11.285472  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:11.285554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:11.319587  459741 cri.go:89] found id: ""
	I0717 19:37:11.319629  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.319638  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:11.319646  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:11.319712  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:11.353044  459741 cri.go:89] found id: ""
	I0717 19:37:11.353077  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.353087  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:11.353093  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:11.353189  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:11.389515  459741 cri.go:89] found id: ""
	I0717 19:37:11.389545  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.389557  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:11.389565  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:11.389634  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:11.430599  459741 cri.go:89] found id: ""
	I0717 19:37:11.430632  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.430640  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:11.430646  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:11.430714  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:11.472171  459741 cri.go:89] found id: ""
	I0717 19:37:11.472207  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.472217  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:11.472223  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:11.472295  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:09.193245  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.695407  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:13.176224  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:15.179677  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:13.895579  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:16.394706  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.510599  459741 cri.go:89] found id: ""
	I0717 19:37:11.510672  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.510689  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:11.510706  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:11.510779  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:11.550914  459741 cri.go:89] found id: ""
	I0717 19:37:11.550946  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.550954  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:11.550960  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:11.551017  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:11.591129  459741 cri.go:89] found id: ""
	I0717 19:37:11.591205  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.591219  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:11.591233  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:11.591252  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:11.646229  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:11.646265  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:11.661204  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:11.661243  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:11.742396  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:11.742426  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:11.742442  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:11.824647  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:11.824687  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:14.364360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:14.381022  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:14.381101  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:14.419922  459741 cri.go:89] found id: ""
	I0717 19:37:14.419960  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.419971  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:14.419977  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:14.420032  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:14.459256  459741 cri.go:89] found id: ""
	I0717 19:37:14.459288  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.459296  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:14.459317  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:14.459387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:14.494487  459741 cri.go:89] found id: ""
	I0717 19:37:14.494517  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.494528  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:14.494535  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:14.494609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:14.528878  459741 cri.go:89] found id: ""
	I0717 19:37:14.528919  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.528928  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:14.528934  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:14.528999  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:14.564401  459741 cri.go:89] found id: ""
	I0717 19:37:14.564439  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.564451  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:14.564460  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:14.564548  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:14.604641  459741 cri.go:89] found id: ""
	I0717 19:37:14.604682  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.604694  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:14.604703  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:14.604770  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:14.638128  459741 cri.go:89] found id: ""
	I0717 19:37:14.638159  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.638168  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:14.638175  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:14.638245  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:14.679475  459741 cri.go:89] found id: ""
	I0717 19:37:14.679508  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.679518  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:14.679529  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:14.679545  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:14.733829  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:14.733871  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:14.748878  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:14.748910  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:14.821043  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:14.821073  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:14.821089  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:14.905137  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:14.905178  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:14.193577  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:16.193939  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:17.181158  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:19.675868  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:18.894678  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:20.895683  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:17.445221  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:17.459152  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:17.459221  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:17.498175  459741 cri.go:89] found id: ""
	I0717 19:37:17.498204  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.498216  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:17.498226  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:17.498287  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:17.534460  459741 cri.go:89] found id: ""
	I0717 19:37:17.534498  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.534506  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:17.534512  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:17.534571  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:17.571998  459741 cri.go:89] found id: ""
	I0717 19:37:17.572030  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.572040  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:17.572047  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:17.572110  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:17.611184  459741 cri.go:89] found id: ""
	I0717 19:37:17.611215  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.611224  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:17.611231  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:17.611282  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:17.656227  459741 cri.go:89] found id: ""
	I0717 19:37:17.656275  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.656287  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:17.656295  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:17.656361  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:17.695693  459741 cri.go:89] found id: ""
	I0717 19:37:17.695727  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.695746  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:17.695763  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:17.695835  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:17.734017  459741 cri.go:89] found id: ""
	I0717 19:37:17.734043  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.734052  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:17.734057  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:17.734123  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:17.771539  459741 cri.go:89] found id: ""
	I0717 19:37:17.771575  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.771586  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:17.771597  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:17.771611  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:17.811742  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:17.811783  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:17.861865  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:17.861909  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:17.876221  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:17.876255  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:17.957239  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:17.957262  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:17.957278  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:20.539123  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:20.554464  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:20.554546  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:20.591656  459741 cri.go:89] found id: ""
	I0717 19:37:20.591697  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.591706  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:20.591716  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:20.591775  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:20.629470  459741 cri.go:89] found id: ""
	I0717 19:37:20.629504  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.629513  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:20.629519  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:20.629587  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:20.670022  459741 cri.go:89] found id: ""
	I0717 19:37:20.670090  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.670108  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:20.670120  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:20.670199  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:20.711820  459741 cri.go:89] found id: ""
	I0717 19:37:20.711858  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.711869  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:20.711878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:20.711952  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:20.746305  459741 cri.go:89] found id: ""
	I0717 19:37:20.746339  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.746349  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:20.746356  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:20.746423  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:20.782218  459741 cri.go:89] found id: ""
	I0717 19:37:20.782255  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.782266  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:20.782275  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:20.782351  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:20.818704  459741 cri.go:89] found id: ""
	I0717 19:37:20.818740  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.818749  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:20.818757  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:20.818820  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:20.853662  459741 cri.go:89] found id: ""
	I0717 19:37:20.853693  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.853701  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:20.853710  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:20.853723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:20.896351  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:20.896377  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:20.948402  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:20.948450  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:20.962807  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:20.962840  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:21.057005  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:21.057036  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:21.057055  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:18.693664  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:21.192940  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:21.676124  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:24.175970  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:23.395791  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:25.894186  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:23.634596  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:23.648460  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:23.648555  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:23.687289  459741 cri.go:89] found id: ""
	I0717 19:37:23.687320  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.687331  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:23.687341  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:23.687407  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:23.725794  459741 cri.go:89] found id: ""
	I0717 19:37:23.725826  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.725847  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:23.725855  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:23.725916  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:23.761575  459741 cri.go:89] found id: ""
	I0717 19:37:23.761624  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.761635  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:23.761643  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:23.761709  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:23.800061  459741 cri.go:89] found id: ""
	I0717 19:37:23.800098  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.800111  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:23.800120  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:23.800190  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:23.836067  459741 cri.go:89] found id: ""
	I0717 19:37:23.836098  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.836107  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:23.836113  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:23.836170  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:23.875151  459741 cri.go:89] found id: ""
	I0717 19:37:23.875179  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.875192  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:23.875200  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:23.875268  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:23.913641  459741 cri.go:89] found id: ""
	I0717 19:37:23.913675  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.913685  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:23.913693  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:23.913759  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:23.950362  459741 cri.go:89] found id: ""
	I0717 19:37:23.950391  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.950400  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:23.950410  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:23.950426  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:24.000879  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:24.000924  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:24.014874  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:24.014912  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:24.086589  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:24.086624  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:24.086639  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:24.163160  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:24.163208  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:23.194522  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:25.694306  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:26.675299  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:28.675607  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:31.176216  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:27.895077  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:29.895208  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:26.705781  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:26.720471  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:26.720562  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:26.776895  459741 cri.go:89] found id: ""
	I0717 19:37:26.776927  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.776936  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:26.776945  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:26.777038  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:26.812191  459741 cri.go:89] found id: ""
	I0717 19:37:26.812219  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.812228  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:26.812234  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:26.812288  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:26.851142  459741 cri.go:89] found id: ""
	I0717 19:37:26.851174  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.851183  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:26.851189  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:26.851243  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:26.887218  459741 cri.go:89] found id: ""
	I0717 19:37:26.887254  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.887266  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:26.887274  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:26.887364  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:26.924197  459741 cri.go:89] found id: ""
	I0717 19:37:26.924226  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.924234  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:26.924240  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:26.924293  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:26.964475  459741 cri.go:89] found id: ""
	I0717 19:37:26.964528  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.964538  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:26.964545  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:26.964618  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:27.001951  459741 cri.go:89] found id: ""
	I0717 19:37:27.002001  459741 logs.go:276] 0 containers: []
	W0717 19:37:27.002010  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:27.002017  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:27.002068  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:27.037062  459741 cri.go:89] found id: ""
	I0717 19:37:27.037094  459741 logs.go:276] 0 containers: []
	W0717 19:37:27.037108  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:27.037122  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:27.037140  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:27.090343  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:27.090389  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:27.104534  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:27.104579  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:27.179957  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:27.179982  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:27.179995  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:27.260358  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:27.260399  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:29.806487  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:29.821519  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:29.821584  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:29.856293  459741 cri.go:89] found id: ""
	I0717 19:37:29.856328  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.856338  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:29.856347  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:29.856413  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:29.893174  459741 cri.go:89] found id: ""
	I0717 19:37:29.893210  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.893220  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:29.893229  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:29.893294  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:29.928264  459741 cri.go:89] found id: ""
	I0717 19:37:29.928298  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.928309  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:29.928316  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:29.928386  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:29.963399  459741 cri.go:89] found id: ""
	I0717 19:37:29.963441  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.963453  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:29.963461  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:29.963532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:30.001835  459741 cri.go:89] found id: ""
	I0717 19:37:30.001868  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.001878  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:30.001886  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:30.001953  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:30.039476  459741 cri.go:89] found id: ""
	I0717 19:37:30.039507  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.039516  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:30.039526  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:30.039601  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:30.076051  459741 cri.go:89] found id: ""
	I0717 19:37:30.076089  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.076101  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:30.076121  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:30.076198  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:30.110959  459741 cri.go:89] found id: ""
	I0717 19:37:30.110988  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.111000  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:30.111013  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:30.111029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:30.195062  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:30.195101  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:30.235830  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:30.235872  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:30.291057  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:30.291098  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:30.306510  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:30.306543  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:30.382689  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:28.193720  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:30.693187  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.193323  459147 pod_ready.go:81] duration metric: took 4m0.007067784s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	E0717 19:37:32.193346  459147 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 19:37:32.193354  459147 pod_ready.go:38] duration metric: took 4m5.556690666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:37:32.193373  459147 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:37:32.193409  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:32.193469  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:32.245735  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:32.245775  459147 cri.go:89] found id: ""
	I0717 19:37:32.245785  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:32.245865  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.250669  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:32.250736  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:32.291837  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:32.291863  459147 cri.go:89] found id: ""
	I0717 19:37:32.291873  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:32.291944  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.296739  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:32.296806  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:32.335823  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:32.335854  459147 cri.go:89] found id: ""
	I0717 19:37:32.335873  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:32.335944  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.341789  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:32.341875  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:32.382106  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:32.382128  459147 cri.go:89] found id: ""
	I0717 19:37:32.382136  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:32.382183  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.386399  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:32.386453  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:32.426319  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:32.426348  459147 cri.go:89] found id: ""
	I0717 19:37:32.426358  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:32.426415  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.431280  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:32.431363  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:33.176404  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:35.177851  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.397457  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:34.894702  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.883437  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:32.898085  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:32.898159  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:32.933782  459741 cri.go:89] found id: ""
	I0717 19:37:32.933813  459741 logs.go:276] 0 containers: []
	W0717 19:37:32.933823  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:32.933842  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:32.933909  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:32.973843  459741 cri.go:89] found id: ""
	I0717 19:37:32.973871  459741 logs.go:276] 0 containers: []
	W0717 19:37:32.973879  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:32.973885  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:32.973936  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:33.010691  459741 cri.go:89] found id: ""
	I0717 19:37:33.010718  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.010727  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:33.010732  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:33.010791  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:33.051223  459741 cri.go:89] found id: ""
	I0717 19:37:33.051258  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.051269  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:33.051276  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:33.051345  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:33.091182  459741 cri.go:89] found id: ""
	I0717 19:37:33.091212  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.091220  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:33.091225  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:33.091279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:33.128755  459741 cri.go:89] found id: ""
	I0717 19:37:33.128791  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.128804  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:33.128820  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:33.128887  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:33.171834  459741 cri.go:89] found id: ""
	I0717 19:37:33.171871  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.171883  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:33.171890  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:33.171956  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:33.230954  459741 cri.go:89] found id: ""
	I0717 19:37:33.230982  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.230990  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:33.231001  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:33.231013  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:33.325437  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:33.325483  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:33.325500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:33.418548  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:33.418590  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:33.467574  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:33.467614  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:33.521312  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:33.521346  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
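	(Annotation: each polling cycle above repeats the same diagnostics because no control-plane containers exist yet on this v1.20.0 node and the apiserver on localhost:8443 refuses connections. A minimal sketch of reproducing that check by hand on the same CRI-O node; the exact flags are illustrative assumptions, not part of the test run:)

	  sudo crictl ps -a --quiet --name=kube-apiserver   # empty output = no apiserver container, as in the log
	  sudo ss -ltnp | grep 8443                         # nothing listening explains the "connection refused" errors
	  sudo journalctl -u kubelet -n 400 --no-pager      # kubelet side: why static pods are not starting
	  sudo journalctl -u crio -n 400 --no-pager         # CRI-O side of the same story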
	I0717 19:37:36.037360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:36.051209  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:36.051279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:36.088849  459741 cri.go:89] found id: ""
	I0717 19:37:36.088897  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.088909  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:36.088916  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:36.088973  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:36.124070  459741 cri.go:89] found id: ""
	I0717 19:37:36.124106  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.124118  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:36.124125  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:36.124199  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:36.159373  459741 cri.go:89] found id: ""
	I0717 19:37:36.159402  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.159410  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:36.159415  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:36.159467  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:36.197269  459741 cri.go:89] found id: ""
	I0717 19:37:36.197294  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.197302  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:36.197337  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:36.197389  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:36.231024  459741 cri.go:89] found id: ""
	I0717 19:37:36.231060  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.231072  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:36.231080  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:36.231152  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:36.265388  459741 cri.go:89] found id: ""
	I0717 19:37:36.265414  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.265422  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:36.265429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:36.265477  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:36.301738  459741 cri.go:89] found id: ""
	I0717 19:37:36.301774  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.301786  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:36.301794  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:36.301892  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:36.340042  459741 cri.go:89] found id: ""
	I0717 19:37:36.340072  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.340080  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:36.340091  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:36.340113  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:36.389928  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:36.389962  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:36.442668  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:36.442698  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.458862  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:36.458908  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:32.470477  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:32.470505  459147 cri.go:89] found id: ""
	I0717 19:37:32.470514  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:32.470579  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.474790  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:32.474845  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:32.511020  459147 cri.go:89] found id: ""
	I0717 19:37:32.511060  459147 logs.go:276] 0 containers: []
	W0717 19:37:32.511075  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:32.511083  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:32.511148  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:32.550662  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:32.550694  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:32.550700  459147 cri.go:89] found id: ""
	I0717 19:37:32.550710  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:32.550779  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.555544  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.559818  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:32.559845  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:32.599011  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:32.599044  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:32.639034  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:32.639072  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:32.680456  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:32.680497  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:32.735881  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:32.735919  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:33.295876  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:33.295927  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:33.453164  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:33.453204  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:33.469665  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:33.469696  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:33.518388  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:33.518425  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:33.580637  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:33.580683  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:33.618544  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:33.618584  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:33.656083  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:33.656127  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:33.703083  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:33.703133  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:36.261037  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:36.278701  459147 api_server.go:72] duration metric: took 4m12.907019507s to wait for apiserver process to appear ...
	I0717 19:37:36.278734  459147 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:37:36.278780  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:36.278843  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:36.320128  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:36.320158  459147 cri.go:89] found id: ""
	I0717 19:37:36.320169  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:36.320231  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.325077  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:36.325145  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:36.375930  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:36.375956  459147 cri.go:89] found id: ""
	I0717 19:37:36.375965  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:36.376022  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.381348  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:36.381428  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:36.425613  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:36.425642  459147 cri.go:89] found id: ""
	I0717 19:37:36.425653  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:36.425718  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.430743  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:36.430809  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:36.473039  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:36.473071  459147 cri.go:89] found id: ""
	I0717 19:37:36.473082  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:36.473144  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.477553  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:36.477632  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:36.519042  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:36.519066  459147 cri.go:89] found id: ""
	I0717 19:37:36.519088  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:36.519168  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.523986  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:36.524052  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:36.565547  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:36.565574  459147 cri.go:89] found id: ""
	I0717 19:37:36.565583  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:36.565636  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.570755  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:36.570832  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:36.608157  459147 cri.go:89] found id: ""
	I0717 19:37:36.608185  459147 logs.go:276] 0 containers: []
	W0717 19:37:36.608194  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:36.608201  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:36.608258  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:36.652807  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:36.652828  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:36.652832  459147 cri.go:89] found id: ""
	I0717 19:37:36.652839  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:36.652899  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.657815  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.663187  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:36.663219  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.681970  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:36.682006  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:36.797996  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:36.798041  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:36.862257  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:36.862300  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:36.900711  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:36.900752  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:37.384370  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:37.384415  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:37.676589  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:40.177720  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:36.888133  459447 pod_ready.go:81] duration metric: took 4m0.000157346s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" ...
	E0717 19:37:36.888161  459447 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 19:37:36.888179  459447 pod_ready.go:38] duration metric: took 4m7.552581235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:37:36.888210  459447 kubeadm.go:597] duration metric: took 4m17.06862666s to restartPrimaryControlPlane
	W0717 19:37:36.888317  459447 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:37:36.888368  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	W0717 19:37:36.537169  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:36.537199  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:36.537216  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:39.120374  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:39.138989  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:39.139065  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:39.198086  459741 cri.go:89] found id: ""
	I0717 19:37:39.198113  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.198121  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:39.198128  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:39.198192  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:39.249660  459741 cri.go:89] found id: ""
	I0717 19:37:39.249707  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.249718  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:39.249725  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:39.249802  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:39.296042  459741 cri.go:89] found id: ""
	I0717 19:37:39.296079  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.296105  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:39.296115  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:39.296198  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:39.335401  459741 cri.go:89] found id: ""
	I0717 19:37:39.335441  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.335453  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:39.335461  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:39.335532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:39.379343  459741 cri.go:89] found id: ""
	I0717 19:37:39.379389  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.379401  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:39.379409  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:39.379478  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:39.417450  459741 cri.go:89] found id: ""
	I0717 19:37:39.417478  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.417486  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:39.417493  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:39.417556  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:39.453778  459741 cri.go:89] found id: ""
	I0717 19:37:39.453821  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.453835  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:39.453843  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:39.453937  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:39.490619  459741 cri.go:89] found id: ""
	I0717 19:37:39.490654  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.490666  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:39.490678  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:39.490695  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:39.552266  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:39.552304  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:39.567973  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:39.568018  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:39.659709  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:39.659740  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:39.659757  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:39.752017  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:39.752064  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:37.438269  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:37.438314  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:37.491298  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:37.491338  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:37.544646  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:37.544686  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:37.608191  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:37.608229  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:37.652477  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:37.652526  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:37.693416  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:37.693460  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:37.740997  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:37.741045  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.285764  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:37:40.292091  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 200:
	ok
	I0717 19:37:40.293337  459147 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:37:40.293368  459147 api_server.go:131] duration metric: took 4.014624748s to wait for apiserver health ...
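	(Annotation: the healthz probe above is an HTTPS GET against the apiserver; the wait loop ends once it returns 200/ok. A rough host-side equivalent using the endpoint from the log; the -k flag skips certificate verification and is an assumption for illustration only:)

	  curl -sk https://192.168.61.66:8443/healthz          # prints "ok" when the apiserver is healthy
	  curl -sk "https://192.168.61.66:8443/readyz?verbose"  # optional: lists the individual readiness checks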
	I0717 19:37:40.293379  459147 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:37:40.293412  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:40.293485  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:40.334754  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:40.334783  459147 cri.go:89] found id: ""
	I0717 19:37:40.334794  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:40.334855  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.338862  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:40.338932  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:40.379320  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:40.379350  459147 cri.go:89] found id: ""
	I0717 19:37:40.379361  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:40.379424  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.384351  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:40.384426  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:40.423393  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:40.423421  459147 cri.go:89] found id: ""
	I0717 19:37:40.423432  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:40.423496  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.429541  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:40.429622  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:40.476723  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:40.476752  459147 cri.go:89] found id: ""
	I0717 19:37:40.476762  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:40.476822  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.483324  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:40.483407  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:40.530062  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:40.530090  459147 cri.go:89] found id: ""
	I0717 19:37:40.530100  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:40.530160  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.535894  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:40.535980  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:40.574966  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:40.575000  459147 cri.go:89] found id: ""
	I0717 19:37:40.575011  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:40.575082  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.579633  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:40.579709  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:40.617093  459147 cri.go:89] found id: ""
	I0717 19:37:40.617131  459147 logs.go:276] 0 containers: []
	W0717 19:37:40.617143  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:40.617151  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:40.617217  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:40.670143  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.670170  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:40.670177  459147 cri.go:89] found id: ""
	I0717 19:37:40.670188  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:40.670265  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.675795  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.681005  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:40.681027  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.729750  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:40.729797  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:41.109749  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:41.109806  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:41.128573  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:41.128616  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:41.246119  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:41.246163  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:41.298281  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:41.298342  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:41.376160  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:41.376205  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:41.444696  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:41.444732  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:41.488191  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:41.488225  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:41.554001  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:41.554055  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:41.596172  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:41.596208  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:41.636145  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:41.636184  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:41.687058  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:41.687092  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:44.246334  459147 system_pods.go:59] 8 kube-system pods found
	I0717 19:37:44.246367  459147 system_pods.go:61] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running
	I0717 19:37:44.246373  459147 system_pods.go:61] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running
	I0717 19:37:44.246379  459147 system_pods.go:61] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running
	I0717 19:37:44.246384  459147 system_pods.go:61] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running
	I0717 19:37:44.246390  459147 system_pods.go:61] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:37:44.246394  459147 system_pods.go:61] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running
	I0717 19:37:44.246401  459147 system_pods.go:61] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:37:44.246406  459147 system_pods.go:61] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:37:44.246416  459147 system_pods.go:74] duration metric: took 3.953030235s to wait for pod list to return data ...
	I0717 19:37:44.246425  459147 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:37:44.249315  459147 default_sa.go:45] found service account: "default"
	I0717 19:37:44.249336  459147 default_sa.go:55] duration metric: took 2.904936ms for default service account to be created ...
	I0717 19:37:44.249344  459147 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:37:44.254845  459147 system_pods.go:86] 8 kube-system pods found
	I0717 19:37:44.254873  459147 system_pods.go:89] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running
	I0717 19:37:44.254879  459147 system_pods.go:89] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running
	I0717 19:37:44.254883  459147 system_pods.go:89] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running
	I0717 19:37:44.254888  459147 system_pods.go:89] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running
	I0717 19:37:44.254892  459147 system_pods.go:89] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:37:44.254895  459147 system_pods.go:89] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running
	I0717 19:37:44.254902  459147 system_pods.go:89] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:37:44.254908  459147 system_pods.go:89] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:37:44.254916  459147 system_pods.go:126] duration metric: took 5.565796ms to wait for k8s-apps to be running ...
	I0717 19:37:44.254922  459147 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:37:44.254970  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:37:44.273765  459147 system_svc.go:56] duration metric: took 18.830474ms WaitForService to wait for kubelet
	I0717 19:37:44.273805  459147 kubeadm.go:582] duration metric: took 4m20.90212576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:37:44.273838  459147 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:37:44.278782  459147 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:37:44.278833  459147 node_conditions.go:123] node cpu capacity is 2
	I0717 19:37:44.278864  459147 node_conditions.go:105] duration metric: took 5.01941ms to run NodePressure ...
	I0717 19:37:44.278879  459147 start.go:241] waiting for startup goroutines ...
	I0717 19:37:44.278889  459147 start.go:246] waiting for cluster config update ...
	I0717 19:37:44.278906  459147 start.go:255] writing updated cluster config ...
	I0717 19:37:44.279303  459147 ssh_runner.go:195] Run: rm -f paused
	I0717 19:37:44.331361  459147 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 19:37:44.334137  459147 out.go:177] * Done! kubectl is now configured to use "no-preload-713715" cluster and "default" namespace by default
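	(Annotation: this run finishes with kubectl 1.30.3 against a v1.31.0-beta.0 cluster, which is within kubectl's supported one-minor-version skew, hence only a warning-level "minor skew: 1" note. A quick post-run sanity check against the same profile, purely illustrative:)

	  kubectl --context no-preload-713715 get nodes
	  kubectl --context no-preload-713715 -n kube-system get pods
	  minikube -p no-preload-713715 status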
	I0717 19:37:42.676991  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:45.176025  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:42.298864  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:42.312076  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:42.312160  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:42.346742  459741 cri.go:89] found id: ""
	I0717 19:37:42.346767  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.346782  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:42.346787  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:42.346839  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:42.386100  459741 cri.go:89] found id: ""
	I0717 19:37:42.386131  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.386139  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:42.386145  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:42.386196  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:42.420604  459741 cri.go:89] found id: ""
	I0717 19:37:42.420634  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.420646  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:42.420656  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:42.420725  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:42.457305  459741 cri.go:89] found id: ""
	I0717 19:37:42.457338  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.457349  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:42.457357  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:42.457422  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:42.491383  459741 cri.go:89] found id: ""
	I0717 19:37:42.491418  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.491427  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:42.491434  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:42.491489  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:42.527500  459741 cri.go:89] found id: ""
	I0717 19:37:42.527533  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.527547  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:42.527557  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:42.527642  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:42.560724  459741 cri.go:89] found id: ""
	I0717 19:37:42.560759  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.560769  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:42.560778  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:42.560854  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:42.595812  459741 cri.go:89] found id: ""
	I0717 19:37:42.595846  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.595858  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:42.595870  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:42.595886  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:42.610094  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:42.610129  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:42.683744  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:42.683763  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:42.683776  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:42.767187  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:42.767237  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:42.810319  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:42.810350  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:45.363245  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:45.378562  459741 kubeadm.go:597] duration metric: took 4m4.629259775s to restartPrimaryControlPlane
	W0717 19:37:45.378681  459741 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:37:45.378723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:37:47.675784  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:50.174617  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:50.298107  459741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.919332692s)
	I0717 19:37:50.298189  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:37:50.314299  459741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:37:50.325112  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:37:50.335943  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:37:50.335970  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:37:50.336018  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:37:50.345604  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:37:50.345669  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:37:50.355339  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:37:50.365401  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:37:50.365468  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:37:50.378870  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:37:50.388710  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:37:50.388779  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:37:50.398847  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:37:50.408579  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:37:50.408648  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
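	(Annotation: the stale-config cleanup above is mechanical: for each kubeconfig under /etc/kubernetes, grep for the expected control-plane endpoint and delete the file when the endpoint is absent — here every grep fails simply because the files no longer exist after "kubeadm reset". A condensed sketch of that loop, assuming the same endpoint and paths shown in the log; the same pattern appears later for the v1.30.2 node on port 8444:)

	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f \
	      || sudo rm -f /etc/kubernetes/$f
	  done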
	I0717 19:37:50.419223  459741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:37:50.655878  459741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:37:52.175610  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:54.675346  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:57.175606  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:59.175665  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:01.675667  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:04.174856  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:06.175048  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:08.558767  459447 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.670364582s)
	I0717 19:38:08.558869  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:08.574972  459447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:38:08.585748  459447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:38:08.595641  459447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:38:08.595677  459447 kubeadm.go:157] found existing configuration files:
	
	I0717 19:38:08.595741  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 19:38:08.605738  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:38:08.605792  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:38:08.615415  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 19:38:08.625406  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:38:08.625465  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:38:08.635462  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 19:38:08.644862  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:38:08.644938  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:38:08.654840  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 19:38:08.664308  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:38:08.664371  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:38:08.675152  459447 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:38:08.726060  459447 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 19:38:08.726181  459447 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:38:08.868399  459447 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:38:08.868535  459447 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:38:08.868680  459447 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:38:09.092126  459447 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:38:09.094144  459447 out.go:204]   - Generating certificates and keys ...
	I0717 19:38:09.094257  459447 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:38:09.094344  459447 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:38:09.094447  459447 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:38:09.094529  459447 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:38:09.094728  459447 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:38:09.094841  459447 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:38:09.094958  459447 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:38:09.095051  459447 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:38:09.095145  459447 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:38:09.095234  459447 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:38:09.095302  459447 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:38:09.095407  459447 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:38:09.220760  459447 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:38:09.395779  459447 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:38:09.485283  459447 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:38:09.582142  459447 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:38:09.644739  459447 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:38:09.645546  459447 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:38:09.648168  459447 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:38:08.175516  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:10.676234  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:09.651091  459447 out.go:204]   - Booting up control plane ...
	I0717 19:38:09.651237  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:38:09.651380  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:38:09.651472  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:38:09.672137  459447 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:38:09.675016  459447 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:38:09.675265  459447 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:38:09.835705  459447 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:38:09.835804  459447 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:38:10.837657  459447 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002210874s
	I0717 19:38:10.837780  459447 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 19:38:15.841849  459447 kubeadm.go:310] [api-check] The API server is healthy after 5.002346886s
	I0717 19:38:15.853189  459447 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:38:15.871261  459447 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:38:15.901421  459447 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:38:15.901663  459447 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-378944 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:38:15.914138  459447 kubeadm.go:310] [bootstrap-token] Using token: f20mgr.mp8yeahngp4xg46o
	I0717 19:38:12.678188  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:15.176507  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:15.916156  459447 out.go:204]   - Configuring RBAC rules ...
	I0717 19:38:15.916304  459447 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:38:15.926114  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:38:15.936748  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:38:15.940344  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:38:15.943530  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:38:15.947036  459447 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:38:16.249457  459447 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:38:16.706293  459447 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 19:38:17.247816  459447 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 19:38:17.249321  459447 kubeadm.go:310] 
	I0717 19:38:17.249431  459447 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 19:38:17.249453  459447 kubeadm.go:310] 
	I0717 19:38:17.249552  459447 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 19:38:17.249563  459447 kubeadm.go:310] 
	I0717 19:38:17.249594  459447 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 19:38:17.249677  459447 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:38:17.249768  459447 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:38:17.249791  459447 kubeadm.go:310] 
	I0717 19:38:17.249868  459447 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 19:38:17.249878  459447 kubeadm.go:310] 
	I0717 19:38:17.249949  459447 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:38:17.249968  459447 kubeadm.go:310] 
	I0717 19:38:17.250016  459447 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 19:38:17.250083  459447 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:38:17.250143  459447 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:38:17.250149  459447 kubeadm.go:310] 
	I0717 19:38:17.250269  459447 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:38:17.250371  459447 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 19:38:17.250381  459447 kubeadm.go:310] 
	I0717 19:38:17.250484  459447 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token f20mgr.mp8yeahngp4xg46o \
	I0717 19:38:17.250605  459447 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 19:38:17.250663  459447 kubeadm.go:310] 	--control-plane 
	I0717 19:38:17.250677  459447 kubeadm.go:310] 
	I0717 19:38:17.250771  459447 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:38:17.250784  459447 kubeadm.go:310] 
	I0717 19:38:17.250870  459447 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token f20mgr.mp8yeahngp4xg46o \
	I0717 19:38:17.251029  459447 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 19:38:17.252262  459447 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:38:17.252302  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:38:17.252318  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:38:17.254910  459447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:38:17.669679  459061 pod_ready.go:81] duration metric: took 4m0.000889569s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" ...
	E0717 19:38:17.669706  459061 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 19:38:17.669726  459061 pod_ready.go:38] duration metric: took 4m8.910120635s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:17.669768  459061 kubeadm.go:597] duration metric: took 4m18.632716414s to restartPrimaryControlPlane
	W0717 19:38:17.669838  459061 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:38:17.669870  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:38:17.256192  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:38:17.268586  459447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:38:17.292455  459447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:38:17.292536  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:17.292623  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-378944 minikube.k8s.io/updated_at=2024_07_17T19_38_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=default-k8s-diff-port-378944 minikube.k8s.io/primary=true
	I0717 19:38:17.325184  459447 ops.go:34] apiserver oom_adj: -16
	I0717 19:38:17.469427  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:17.969845  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:18.470139  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:18.969524  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:19.469856  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:19.970486  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:20.470263  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:20.970157  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:21.470331  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:21.969885  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:22.469572  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:22.969898  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:23.470149  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:23.970327  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:24.470275  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:24.970386  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:25.469631  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:25.969749  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:26.469512  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:26.970082  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:27.469534  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:27.970318  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:28.470232  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:28.970033  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:29.469586  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:29.969588  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:30.469599  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:30.970505  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:31.119385  459447 kubeadm.go:1113] duration metric: took 13.826924083s to wait for elevateKubeSystemPrivileges
	I0717 19:38:31.119428  459447 kubeadm.go:394] duration metric: took 5m11.355625204s to StartCluster
	I0717 19:38:31.119449  459447 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:38:31.119548  459447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:38:31.121296  459447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:38:31.121610  459447 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:38:31.121724  459447 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:38:31.121802  459447 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-378944"
	I0717 19:38:31.121827  459447 config.go:182] Loaded profile config "default-k8s-diff-port-378944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:38:31.121846  459447 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-378944"
	I0717 19:38:31.121849  459447 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-378944"
	I0717 19:38:31.121873  459447 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-378944"
	W0717 19:38:31.121883  459447 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:38:31.121899  459447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-378944"
	I0717 19:38:31.121906  459447 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-378944"
	W0717 19:38:31.121915  459447 addons.go:243] addon metrics-server should already be in state true
	I0717 19:38:31.121927  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.121969  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.122322  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122339  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122366  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.122379  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122388  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.122411  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.123339  459447 out.go:177] * Verifying Kubernetes components...
	I0717 19:38:31.129194  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:38:31.139023  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0717 19:38:31.139292  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0717 19:38:31.139632  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.139775  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.140272  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.140292  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.140684  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.140710  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.140731  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.141234  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.141257  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.141425  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0717 19:38:31.141431  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.141919  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.142149  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.142181  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.142410  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.142435  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.142824  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.143055  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.147020  459447 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-378944"
	W0717 19:38:31.147043  459447 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:38:31.147076  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.147428  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.147462  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.158908  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
	I0717 19:38:31.159534  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.160413  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.160438  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.161313  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.161588  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.161794  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0717 19:38:31.162315  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.162935  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.162963  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.163360  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.163618  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.164401  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.165089  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40011
	I0717 19:38:31.165402  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.165493  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.166082  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.166108  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.166133  459447 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:38:31.166520  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.166951  459447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:38:31.166995  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.167294  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.167678  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:38:31.167700  459447 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:38:31.167725  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.168668  459447 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:38:31.168686  459447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:38:31.168704  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.171358  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.171986  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.172013  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172236  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172379  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.172558  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.172646  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.172749  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.172778  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172902  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.173186  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.173396  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.173570  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.173711  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.184779  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0717 19:38:31.185400  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.186325  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.186350  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.186736  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.186981  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.188627  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.188841  459447 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:38:31.188860  459447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:38:31.188881  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.191674  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.192104  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.192129  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.192375  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.192868  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.193084  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.193250  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.351524  459447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:38:31.365996  459447 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-378944" to be "Ready" ...
	I0717 19:38:31.376135  459447 node_ready.go:49] node "default-k8s-diff-port-378944" has status "Ready":"True"
	I0717 19:38:31.376168  459447 node_ready.go:38] duration metric: took 10.135533ms for node "default-k8s-diff-port-378944" to be "Ready" ...
	I0717 19:38:31.376182  459447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:31.385746  459447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:31.471924  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:38:31.488412  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:38:31.488440  459447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:38:31.489634  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:38:31.578028  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:38:31.578059  459447 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:38:31.653567  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:38:31.653598  459447 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:38:31.692100  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:38:32.700716  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.228741753s)
	I0717 19:38:32.700795  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.700796  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.211127639s)
	I0717 19:38:32.700851  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.700869  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.700808  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703149  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703149  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703155  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703183  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703193  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.703202  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703163  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703235  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703254  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.703267  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703505  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703517  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703529  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703554  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703520  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.778305  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.778331  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.778693  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.778779  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.778733  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.942079  459447 pod_ready.go:92] pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:32.942114  459447 pod_ready.go:81] duration metric: took 1.556334407s for pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:32.942128  459447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.018197  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326052616s)
	I0717 19:38:33.018262  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:33.018277  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:33.018625  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:33.018649  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:33.018659  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:33.018669  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:33.018696  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:33.018956  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:33.018975  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:33.018996  459447 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-378944"
	I0717 19:38:33.021803  459447 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:38:33.023032  459447 addons.go:510] duration metric: took 1.901306809s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:38:33.949013  459447 pod_ready.go:92] pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.949038  459447 pod_ready.go:81] duration metric: took 1.006901797s for pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.949050  459447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.953373  459447 pod_ready.go:92] pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.953393  459447 pod_ready.go:81] duration metric: took 4.33631ms for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.953404  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.957845  459447 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.957869  459447 pod_ready.go:81] duration metric: took 4.456882ms for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.957881  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.962465  459447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.962488  459447 pod_ready.go:81] duration metric: took 4.598385ms for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.962500  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vhjq4" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.170244  459447 pod_ready.go:92] pod "kube-proxy-vhjq4" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:34.170274  459447 pod_ready.go:81] duration metric: took 207.766629ms for pod "kube-proxy-vhjq4" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.170284  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.570267  459447 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:34.570299  459447 pod_ready.go:81] duration metric: took 400.008056ms for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.570324  459447 pod_ready.go:38] duration metric: took 3.194102991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:34.570356  459447 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:38:34.570415  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:38:34.590893  459447 api_server.go:72] duration metric: took 3.469242847s to wait for apiserver process to appear ...
	I0717 19:38:34.590918  459447 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:38:34.590939  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:38:34.596086  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 200:
	ok
	I0717 19:38:34.597189  459447 api_server.go:141] control plane version: v1.30.2
	I0717 19:38:34.597213  459447 api_server.go:131] duration metric: took 6.288225ms to wait for apiserver health ...
	I0717 19:38:34.597221  459447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:38:34.774523  459447 system_pods.go:59] 9 kube-system pods found
	I0717 19:38:34.774563  459447 system_pods.go:61] "coredns-7db6d8ff4d-jnwgp" [f86efa81-cbe0-44a7-888f-639af3dc58ad] Running
	I0717 19:38:34.774571  459447 system_pods.go:61] "coredns-7db6d8ff4d-xbtct" [c24ce9ab-babb-4589-8046-e8e2d4ca68af] Running
	I0717 19:38:34.774577  459447 system_pods.go:61] "etcd-default-k8s-diff-port-378944" [b15d7ac0-b014-4fed-8e03-3b2eb8b23911] Running
	I0717 19:38:34.774582  459447 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-378944" [78cd796b-d751-44dd-91e7-85b48c77d87c] Running
	I0717 19:38:34.774590  459447 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-378944" [4981a20d-ce96-4c27-9b14-17e4a8a18a7c] Running
	I0717 19:38:34.774595  459447 system_pods.go:61] "kube-proxy-vhjq4" [092af79d-ebc0-4e16-97ef-725195e95344] Running
	I0717 19:38:34.774598  459447 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-378944" [60a0717a-ad29-4360-a514-afc1081f115c] Running
	I0717 19:38:34.774607  459447 system_pods.go:61] "metrics-server-569cc877fc-hvknj" [d214e760-d49e-4554-85c2-77e5da1b150f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:38:34.774613  459447 system_pods.go:61] "storage-provisioner" [153a102e-f07b-46b4-a9d0-9e754237ca6e] Running
	I0717 19:38:34.774624  459447 system_pods.go:74] duration metric: took 177.395337ms to wait for pod list to return data ...
	I0717 19:38:34.774636  459447 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:38:34.970004  459447 default_sa.go:45] found service account: "default"
	I0717 19:38:34.970040  459447 default_sa.go:55] duration metric: took 195.394993ms for default service account to be created ...
	I0717 19:38:34.970054  459447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:38:35.173288  459447 system_pods.go:86] 9 kube-system pods found
	I0717 19:38:35.173327  459447 system_pods.go:89] "coredns-7db6d8ff4d-jnwgp" [f86efa81-cbe0-44a7-888f-639af3dc58ad] Running
	I0717 19:38:35.173336  459447 system_pods.go:89] "coredns-7db6d8ff4d-xbtct" [c24ce9ab-babb-4589-8046-e8e2d4ca68af] Running
	I0717 19:38:35.173343  459447 system_pods.go:89] "etcd-default-k8s-diff-port-378944" [b15d7ac0-b014-4fed-8e03-3b2eb8b23911] Running
	I0717 19:38:35.173352  459447 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-378944" [78cd796b-d751-44dd-91e7-85b48c77d87c] Running
	I0717 19:38:35.173359  459447 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-378944" [4981a20d-ce96-4c27-9b14-17e4a8a18a7c] Running
	I0717 19:38:35.173365  459447 system_pods.go:89] "kube-proxy-vhjq4" [092af79d-ebc0-4e16-97ef-725195e95344] Running
	I0717 19:38:35.173370  459447 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-378944" [60a0717a-ad29-4360-a514-afc1081f115c] Running
	I0717 19:38:35.173377  459447 system_pods.go:89] "metrics-server-569cc877fc-hvknj" [d214e760-d49e-4554-85c2-77e5da1b150f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:38:35.173384  459447 system_pods.go:89] "storage-provisioner" [153a102e-f07b-46b4-a9d0-9e754237ca6e] Running
	I0717 19:38:35.173397  459447 system_pods.go:126] duration metric: took 203.335308ms to wait for k8s-apps to be running ...
	I0717 19:38:35.173406  459447 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:38:35.173471  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:35.188943  459447 system_svc.go:56] duration metric: took 15.522808ms WaitForService to wait for kubelet
	I0717 19:38:35.188980  459447 kubeadm.go:582] duration metric: took 4.067341756s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:38:35.189006  459447 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:38:35.369694  459447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:38:35.369723  459447 node_conditions.go:123] node cpu capacity is 2
	I0717 19:38:35.369748  459447 node_conditions.go:105] duration metric: took 180.736346ms to run NodePressure ...
	I0717 19:38:35.369764  459447 start.go:241] waiting for startup goroutines ...
	I0717 19:38:35.369773  459447 start.go:246] waiting for cluster config update ...
	I0717 19:38:35.369787  459447 start.go:255] writing updated cluster config ...
	I0717 19:38:35.370064  459447 ssh_runner.go:195] Run: rm -f paused
	I0717 19:38:35.422285  459447 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 19:38:35.424315  459447 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-378944" cluster and "default" namespace by default
	I0717 19:38:49.633874  459061 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.96396735s)
	I0717 19:38:49.633958  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:49.653668  459061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:38:49.665421  459061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:38:49.677405  459061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:38:49.677433  459061 kubeadm.go:157] found existing configuration files:
	
	I0717 19:38:49.677485  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:38:49.688418  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:38:49.688515  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:38:49.699121  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:38:49.709505  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:38:49.709622  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:38:49.720533  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:38:49.731191  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:38:49.731259  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:38:49.741071  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:38:49.750483  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:38:49.750540  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:38:49.759991  459061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:38:49.814169  459061 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 19:38:49.814235  459061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:38:49.977655  459061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:38:49.977811  459061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:38:49.977922  459061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:38:50.204096  459061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:38:50.206849  459061 out.go:204]   - Generating certificates and keys ...
	I0717 19:38:50.206956  459061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:38:50.207032  459061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:38:50.207102  459061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:38:50.207227  459061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:38:50.207341  459061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:38:50.207388  459061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:38:50.207448  459061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:38:50.207511  459061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:38:50.207618  459061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:38:50.207732  459061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:38:50.207787  459061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:38:50.207868  459061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:38:50.298049  459061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:38:50.456369  459061 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:38:50.649923  459061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:38:50.771710  459061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:38:50.939506  459061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:38:50.939999  459061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:38:50.942645  459061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:38:50.944456  459061 out.go:204]   - Booting up control plane ...
	I0717 19:38:50.944563  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:38:50.944648  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:38:50.944906  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:38:50.963779  459061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:38:50.964946  459061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:38:50.964999  459061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:38:51.112106  459061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:38:51.112222  459061 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:38:51.613966  459061 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.041018ms
	I0717 19:38:51.614079  459061 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 19:38:56.617120  459061 kubeadm.go:310] [api-check] The API server is healthy after 5.003106336s
	I0717 19:38:56.635312  459061 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:38:56.653249  459061 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:38:56.688277  459061 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:38:56.688570  459061 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-637675 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:38:56.703781  459061 kubeadm.go:310] [bootstrap-token] Using token: 5c1d8d.hedm6ka56xpdzroz
	I0717 19:38:56.705437  459061 out.go:204]   - Configuring RBAC rules ...
	I0717 19:38:56.705575  459061 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:38:56.712968  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:38:56.723899  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:38:56.731634  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:38:56.737169  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:38:56.745083  459061 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:38:57.024680  459061 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:38:57.477396  459061 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 19:38:58.025476  459061 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 19:38:58.026512  459061 kubeadm.go:310] 
	I0717 19:38:58.026631  459061 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 19:38:58.026655  459061 kubeadm.go:310] 
	I0717 19:38:58.026772  459061 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 19:38:58.026790  459061 kubeadm.go:310] 
	I0717 19:38:58.026828  459061 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 19:38:58.026905  459061 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:38:58.026971  459061 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:38:58.026979  459061 kubeadm.go:310] 
	I0717 19:38:58.027070  459061 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 19:38:58.027094  459061 kubeadm.go:310] 
	I0717 19:38:58.027163  459061 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:38:58.027171  459061 kubeadm.go:310] 
	I0717 19:38:58.027242  459061 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 19:38:58.027341  459061 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:38:58.027431  459061 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:38:58.027442  459061 kubeadm.go:310] 
	I0717 19:38:58.027547  459061 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:38:58.027663  459061 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 19:38:58.027673  459061 kubeadm.go:310] 
	I0717 19:38:58.027788  459061 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5c1d8d.hedm6ka56xpdzroz \
	I0717 19:38:58.027949  459061 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 19:38:58.027998  459061 kubeadm.go:310] 	--control-plane 
	I0717 19:38:58.028012  459061 kubeadm.go:310] 
	I0717 19:38:58.028123  459061 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:38:58.028133  459061 kubeadm.go:310] 
	I0717 19:38:58.028235  459061 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5c1d8d.hedm6ka56xpdzroz \
	I0717 19:38:58.028355  459061 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 19:38:58.028891  459061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
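
	The bootstrap token printed in the join commands above (5c1d8d.…) is only valid for 24 hours by default. As a hedged sketch (assuming SSH access to the control-plane node and kubeadm on PATH), a fresh join command can be regenerated instead of reusing an expired token:

	    # Regenerate a bootstrap token and print the matching join command
	    sudo kubeadm token create --print-join-command
	    # List existing tokens and their remaining TTLs
	    sudo kubeadm token list
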
	I0717 19:38:58.029012  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:38:58.029029  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:38:58.031915  459061 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:38:58.033543  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:38:58.044441  459061 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
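
	The log above shows minikube copying a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist, but the file contents are not captured in this log. The following is a hypothetical minimal bridge conflist of the same general shape, shown only to illustrate what the step produces:

	    # Hypothetical example only: the real 1-k8s.conflist written by minikube is not shown in this log.
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF
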
	I0717 19:38:58.062984  459061 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:38:58.063092  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:58.063115  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-637675 minikube.k8s.io/updated_at=2024_07_17T19_38_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=embed-certs-637675 minikube.k8s.io/primary=true
	I0717 19:38:58.088566  459061 ops.go:34] apiserver oom_adj: -16
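
	The two commands above label the node as the primary minikube node and bind the kube-system:default service account to cluster-admin. A hedged sketch of verifying both from a workstation (assuming kubectl on PATH and the embed-certs-637675 context selected):

	    # Confirm the minikube labels and the RBAC binding created above
	    kubectl get node embed-certs-637675 --show-labels
	    kubectl get clusterrolebinding minikube-rbac -o wide
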
	I0717 19:38:58.243142  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:58.743578  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:59.244162  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:59.743393  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:00.244096  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:00.743309  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:01.244049  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:01.743222  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:02.243771  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:02.743459  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:03.243303  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:03.743299  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:04.243263  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:04.743572  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:05.243876  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:05.743567  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:06.244040  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:06.743302  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:07.244174  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:07.744243  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:08.244108  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:08.744208  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:09.243712  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:09.743417  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:10.243321  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:10.743234  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:11.244006  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:11.744244  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:12.243673  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:12.373286  459061 kubeadm.go:1113] duration metric: took 14.310267908s to wait for elevateKubeSystemPrivileges
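
	The repeated "kubectl get sa default" runs above are minikube polling until the default ServiceAccount exists (the elevateKubeSystemPrivileges wait). A rough shell equivalent of that loop, assuming kubectl is available on the node:

	    # Poll until the default ServiceAccount is created, then stop
	    until kubectl --kubeconfig /var/lib/minikube/kubeconfig -n default get sa default >/dev/null 2>&1; do
	      sleep 0.5
	    done
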
	I0717 19:39:12.373331  459061 kubeadm.go:394] duration metric: took 5m13.390297719s to StartCluster
	I0717 19:39:12.373357  459061 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:39:12.373461  459061 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:39:12.375404  459061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:39:12.375739  459061 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:39:12.375786  459061 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:39:12.375875  459061 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-637675"
	I0717 19:39:12.375919  459061 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-637675"
	W0717 19:39:12.375933  459061 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:39:12.375967  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.375981  459061 config.go:182] Loaded profile config "embed-certs-637675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:39:12.376031  459061 addons.go:69] Setting default-storageclass=true in profile "embed-certs-637675"
	I0717 19:39:12.376062  459061 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-637675"
	I0717 19:39:12.376333  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.376359  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.376426  459061 addons.go:69] Setting metrics-server=true in profile "embed-certs-637675"
	I0717 19:39:12.376494  459061 addons.go:234] Setting addon metrics-server=true in "embed-certs-637675"
	W0717 19:39:12.376526  459061 addons.go:243] addon metrics-server should already be in state true
	I0717 19:39:12.376596  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.376427  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.376672  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.376981  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.377140  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.378935  459061 out.go:177] * Verifying Kubernetes components...
	I0717 19:39:12.380094  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:39:12.396180  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37281
	I0717 19:39:12.396769  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.397333  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.397359  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.397449  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44781
	I0717 19:39:12.397580  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40945
	I0717 19:39:12.397773  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.397893  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.398045  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.398343  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.398355  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.398387  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.398430  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.398488  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.398499  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.398660  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.398798  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.399295  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.399322  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.399545  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.403398  459061 addons.go:234] Setting addon default-storageclass=true in "embed-certs-637675"
	W0717 19:39:12.403420  459061 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:39:12.403451  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.403872  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.403898  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.415595  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0717 19:39:12.416232  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.417013  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.417033  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.417587  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.418029  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.419082  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0717 19:39:12.420074  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.420699  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.420856  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.420875  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.421414  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.421614  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.423149  459061 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:39:12.423248  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
	I0717 19:39:12.423428  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.423575  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.424023  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.424076  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.424418  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.424571  459061 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:39:12.424588  459061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:39:12.424608  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.424944  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.424980  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.425348  459061 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:39:12.426757  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:39:12.426781  459061 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:39:12.426853  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.427990  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.428571  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.428594  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.429076  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.429456  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.429803  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.430161  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.430952  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.432978  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.433047  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.433185  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.433366  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.433623  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.433978  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.441066  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0717 19:39:12.441557  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.442011  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.442029  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.442447  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.442677  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.444789  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.444999  459061 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:39:12.445015  459061 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:39:12.445036  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.447829  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.448361  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.448390  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.448577  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.448770  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.448936  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.449070  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.728350  459061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:39:12.774599  459061 node_ready.go:35] waiting up to 6m0s for node "embed-certs-637675" to be "Ready" ...
	I0717 19:39:12.787047  459061 node_ready.go:49] node "embed-certs-637675" has status "Ready":"True"
	I0717 19:39:12.787080  459061 node_ready.go:38] duration metric: took 12.442277ms for node "embed-certs-637675" to be "Ready" ...
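
	The node readiness wait above can be reproduced outside the test harness; a minimal sketch, assuming the kubectl context for this profile is selected:

	    # Equivalent node readiness check
	    kubectl wait --for=condition=Ready node/embed-certs-637675 --timeout=6m
	    kubectl get nodes -o wide
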
	I0717 19:39:12.787092  459061 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:39:12.794421  459061 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:12.884786  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:39:12.916243  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:39:12.956508  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:39:12.956539  459061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:39:13.012727  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:39:13.012757  459061 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:39:13.090259  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:39:13.090288  459061 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:39:13.189147  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:39:13.743500  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.743529  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.743886  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.743943  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.743967  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.743984  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.743993  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.744243  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.744292  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.744318  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.745277  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.745344  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.745605  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.745624  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.745632  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.745642  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.745646  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.745835  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.745861  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.745876  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.760884  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.760909  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.761330  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.761352  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.761392  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.809721  459061 pod_ready.go:92] pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:13.809743  459061 pod_ready.go:81] duration metric: took 1.015289517s for pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:13.809753  459061 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.027460  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:14.027489  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:14.027856  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:14.027878  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:14.027889  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:14.027898  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:14.028130  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:14.028146  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:14.028177  459061 addons.go:475] Verifying addon metrics-server=true in "embed-certs-637675"
	I0717 19:39:14.030113  459061 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:39:14.031442  459061 addons.go:510] duration metric: took 1.65566168s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
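
	The addon state reported above can be cross-checked after the run. A hedged sketch using the minikube CLI and kubectl (profile name taken from this log):

	    # List addon state for this profile and confirm the metrics-server objects exist
	    minikube -p embed-certs-637675 addons list
	    kubectl -n kube-system get deployment metrics-server
	    kubectl get apiservice v1beta1.metrics.k8s.io
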
	I0717 19:39:14.816503  459061 pod_ready.go:92] pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.816527  459061 pod_ready.go:81] duration metric: took 1.006767634s for pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.816536  459061 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.820749  459061 pod_ready.go:92] pod "etcd-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.820768  459061 pod_ready.go:81] duration metric: took 4.225695ms for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.820775  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.824793  459061 pod_ready.go:92] pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.824812  459061 pod_ready.go:81] duration metric: took 4.02987ms for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.824823  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.828718  459061 pod_ready.go:92] pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.828738  459061 pod_ready.go:81] duration metric: took 3.907636ms for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.828748  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dns5j" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.178249  459061 pod_ready.go:92] pod "kube-proxy-dns5j" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:15.178276  459061 pod_ready.go:81] duration metric: took 349.519823ms for pod "kube-proxy-dns5j" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.178289  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.578418  459061 pod_ready.go:92] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:15.578445  459061 pod_ready.go:81] duration metric: took 400.149092ms for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.578454  459061 pod_ready.go:38] duration metric: took 2.791350468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
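
	The "extra waiting" above polls the same system-critical labels that kubectl can wait for directly. A rough equivalent, assuming kubectl points at this cluster:

	    # Wait for each system-critical component label the test polls for
	    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
	    done
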
	I0717 19:39:15.578471  459061 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:39:15.578526  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:39:15.597456  459061 api_server.go:72] duration metric: took 3.221674147s to wait for apiserver process to appear ...
	I0717 19:39:15.597483  459061 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:39:15.597503  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:39:15.602054  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0717 19:39:15.603214  459061 api_server.go:141] control plane version: v1.30.2
	I0717 19:39:15.603238  459061 api_server.go:131] duration metric: took 5.7478ms to wait for apiserver health ...
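
	The healthz probe above can be reproduced by hand. Depending on cluster configuration, anonymous access to /healthz may be restricted, so this is only a sketch:

	    # Probe the apiserver health endpoint directly (expects "ok" on success)
	    curl -k https://192.168.39.140:8443/healthz
	    # Or go through the configured kubeconfig, which handles client certificates
	    kubectl get --raw /healthz
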
	I0717 19:39:15.603248  459061 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:39:15.783262  459061 system_pods.go:59] 9 kube-system pods found
	I0717 19:39:15.783295  459061 system_pods.go:61] "coredns-7db6d8ff4d-45xn7" [9c936942-55bb-44c9-b446-365ec316c390] Running
	I0717 19:39:15.783300  459061 system_pods.go:61] "coredns-7db6d8ff4d-nw8g8" [0313a484-73be-49e2-a483-b15f47abc24a] Running
	I0717 19:39:15.783303  459061 system_pods.go:61] "etcd-embed-certs-637675" [d83ac63c-5eb5-40f0-bf58-37c048642b72] Running
	I0717 19:39:15.783307  459061 system_pods.go:61] "kube-apiserver-embed-certs-637675" [0b60ef89-e78c-4e24-b391-a5d4930d0f5f] Running
	I0717 19:39:15.783310  459061 system_pods.go:61] "kube-controller-manager-embed-certs-637675" [b2da7425-19f4-4435-8a30-17744a3289b0] Running
	I0717 19:39:15.783312  459061 system_pods.go:61] "kube-proxy-dns5j" [4d248751-6ee4-460d-b608-be6586613e3d] Running
	I0717 19:39:15.783315  459061 system_pods.go:61] "kube-scheduler-embed-certs-637675" [43f463da-858a-4261-b7a1-01e504e157f6] Running
	I0717 19:39:15.783321  459061 system_pods.go:61] "metrics-server-569cc877fc-jf42d" [c92dbb96-5721-4ff9-a428-9215223d2b83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:39:15.783325  459061 system_pods.go:61] "storage-provisioner" [11a18e44-b523-46b2-a890-dd693460e032] Running
	I0717 19:39:15.783331  459061 system_pods.go:74] duration metric: took 180.078172ms to wait for pod list to return data ...
	I0717 19:39:15.783339  459061 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:39:15.978711  459061 default_sa.go:45] found service account: "default"
	I0717 19:39:15.978747  459061 default_sa.go:55] duration metric: took 195.400502ms for default service account to be created ...
	I0717 19:39:15.978762  459061 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:39:16.181968  459061 system_pods.go:86] 9 kube-system pods found
	I0717 19:39:16.181997  459061 system_pods.go:89] "coredns-7db6d8ff4d-45xn7" [9c936942-55bb-44c9-b446-365ec316c390] Running
	I0717 19:39:16.182003  459061 system_pods.go:89] "coredns-7db6d8ff4d-nw8g8" [0313a484-73be-49e2-a483-b15f47abc24a] Running
	I0717 19:39:16.182007  459061 system_pods.go:89] "etcd-embed-certs-637675" [d83ac63c-5eb5-40f0-bf58-37c048642b72] Running
	I0717 19:39:16.182011  459061 system_pods.go:89] "kube-apiserver-embed-certs-637675" [0b60ef89-e78c-4e24-b391-a5d4930d0f5f] Running
	I0717 19:39:16.182016  459061 system_pods.go:89] "kube-controller-manager-embed-certs-637675" [b2da7425-19f4-4435-8a30-17744a3289b0] Running
	I0717 19:39:16.182021  459061 system_pods.go:89] "kube-proxy-dns5j" [4d248751-6ee4-460d-b608-be6586613e3d] Running
	I0717 19:39:16.182025  459061 system_pods.go:89] "kube-scheduler-embed-certs-637675" [43f463da-858a-4261-b7a1-01e504e157f6] Running
	I0717 19:39:16.182033  459061 system_pods.go:89] "metrics-server-569cc877fc-jf42d" [c92dbb96-5721-4ff9-a428-9215223d2b83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:39:16.182042  459061 system_pods.go:89] "storage-provisioner" [11a18e44-b523-46b2-a890-dd693460e032] Running
	I0717 19:39:16.182049  459061 system_pods.go:126] duration metric: took 203.281636ms to wait for k8s-apps to be running ...
	I0717 19:39:16.182057  459061 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:39:16.182101  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:39:16.198464  459061 system_svc.go:56] duration metric: took 16.391405ms WaitForService to wait for kubelet
	I0717 19:39:16.198504  459061 kubeadm.go:582] duration metric: took 3.822728067s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:39:16.198531  459061 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:39:16.378407  459061 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:39:16.378440  459061 node_conditions.go:123] node cpu capacity is 2
	I0717 19:39:16.378451  459061 node_conditions.go:105] duration metric: took 179.91335ms to run NodePressure ...
	I0717 19:39:16.378465  459061 start.go:241] waiting for startup goroutines ...
	I0717 19:39:16.378476  459061 start.go:246] waiting for cluster config update ...
	I0717 19:39:16.378489  459061 start.go:255] writing updated cluster config ...
	I0717 19:39:16.378845  459061 ssh_runner.go:195] Run: rm -f paused
	I0717 19:39:16.431808  459061 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 19:39:16.433648  459061 out.go:177] * Done! kubectl is now configured to use "embed-certs-637675" cluster and "default" namespace by default
	I0717 19:39:46.819105  459741 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:39:46.819209  459741 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 19:39:46.820837  459741 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:39:46.820940  459741 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:39:46.821010  459741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:39:46.821148  459741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:39:46.821282  459741 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:39:46.821377  459741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:39:46.823092  459741 out.go:204]   - Generating certificates and keys ...
	I0717 19:39:46.823190  459741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:39:46.823280  459741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:39:46.823409  459741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:39:46.823509  459741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:39:46.823629  459741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:39:46.823715  459741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:39:46.823802  459741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:39:46.823885  459741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:39:46.823975  459741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:39:46.824067  459741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:39:46.824109  459741 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:39:46.824183  459741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:39:46.824248  459741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:39:46.824309  459741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:39:46.824409  459741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:39:46.824506  459741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:39:46.824642  459741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:39:46.824729  459741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:39:46.824775  459741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:39:46.824869  459741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:39:46.826222  459741 out.go:204]   - Booting up control plane ...
	I0717 19:39:46.826334  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:39:46.826483  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:39:46.826566  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:39:46.826677  459741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:39:46.826855  459741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:39:46.826954  459741 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:39:46.827061  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827286  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827365  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827537  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827618  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827814  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827916  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.828105  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.828210  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.828440  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.828449  459741 kubeadm.go:310] 
	I0717 19:39:46.828482  459741 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:39:46.828544  459741 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:39:46.828555  459741 kubeadm.go:310] 
	I0717 19:39:46.828601  459741 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:39:46.828648  459741 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:39:46.828787  459741 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:39:46.828795  459741 kubeadm.go:310] 
	I0717 19:39:46.828928  459741 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:39:46.828975  459741 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:39:46.829023  459741 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:39:46.829033  459741 kubeadm.go:310] 
	I0717 19:39:46.829156  459741 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:39:46.829280  459741 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:39:46.829288  459741 kubeadm.go:310] 
	I0717 19:39:46.829430  459741 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:39:46.829538  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:39:46.829640  459741 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:39:46.829753  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:39:46.829812  459741 kubeadm.go:310] 
	W0717 19:39:46.829883  459741 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
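
	The failure output above already names the useful diagnostics for this old-k8s-version (v1.20.0 on cri-o) node; collected here as one runnable sequence for the node where kubeadm init timed out waiting for the kubelet:

	    # Check kubelet health and recent logs
	    systemctl status kubelet --no-pager
	    journalctl -xeu kubelet --no-pager | tail -n 100
	    # List Kubernetes containers known to cri-o
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    # Then inspect a failing container's logs by its ID
	    # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
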
	
	I0717 19:39:46.829939  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:39:47.290949  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:39:47.307166  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:39:47.318260  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:39:47.318283  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:39:47.318336  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:39:47.328087  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:39:47.328150  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:39:47.339029  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:39:47.348854  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:39:47.348913  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:39:47.358498  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:39:47.368592  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:39:47.368651  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:39:47.379802  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:39:47.391069  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:39:47.391139  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
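
	The grep/rm sequence above removes kubeconfig files that do not reference the expected control-plane endpoint before retrying kubeadm init. A compact shell equivalent of the same cleanup:

	    # Remove stale kubeconfigs that don't point at the expected control-plane endpoint
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" 2>/dev/null \
	        || sudo rm -f "/etc/kubernetes/$f.conf"
	    done
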
	I0717 19:39:47.402410  459741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:39:47.620822  459741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:41:43.630999  459741 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:41:43.631161  459741 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 19:41:43.631238  459741 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:41:43.631322  459741 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:41:43.631452  459741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:41:43.631595  459741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:41:43.631767  459741 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:41:43.631852  459741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:41:43.633956  459741 out.go:204]   - Generating certificates and keys ...
	I0717 19:41:43.634058  459741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:41:43.634160  459741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:41:43.634292  459741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:41:43.634382  459741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:41:43.634457  459741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:41:43.634560  459741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:41:43.634646  459741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:41:43.634743  459741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:41:43.634848  459741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:41:43.634977  459741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:41:43.635038  459741 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:41:43.635088  459741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:41:43.635129  459741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:41:43.635173  459741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:41:43.635240  459741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:41:43.635326  459741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:41:43.635477  459741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:41:43.635594  459741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:41:43.635675  459741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:41:43.635758  459741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:41:43.637529  459741 out.go:204]   - Booting up control plane ...
	I0717 19:41:43.637719  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:41:43.637857  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:41:43.637948  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:41:43.638086  459741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:41:43.638278  459741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:41:43.638336  459741 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:41:43.638427  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.638656  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.638732  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.638966  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639046  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639310  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639407  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639665  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639769  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639950  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639969  459741 kubeadm.go:310] 
	I0717 19:41:43.640006  459741 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:41:43.640047  459741 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:41:43.640056  459741 kubeadm.go:310] 
	I0717 19:41:43.640101  459741 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:41:43.640148  459741 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:41:43.640247  459741 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:41:43.640255  459741 kubeadm.go:310] 
	I0717 19:41:43.640365  459741 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:41:43.640398  459741 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:41:43.640426  459741 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:41:43.640434  459741 kubeadm.go:310] 
	I0717 19:41:43.640580  459741 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:41:43.640664  459741 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:41:43.640676  459741 kubeadm.go:310] 
	I0717 19:41:43.640772  459741 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:41:43.640849  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:41:43.640912  459741 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:41:43.640975  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:41:43.640997  459741 kubeadm.go:310] 
	I0717 19:41:43.641050  459741 kubeadm.go:394] duration metric: took 8m2.947491611s to StartCluster
	I0717 19:41:43.641102  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:41:43.641159  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:41:43.691693  459741 cri.go:89] found id: ""
	I0717 19:41:43.691734  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.691746  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:41:43.691755  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:41:43.691822  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:41:43.730266  459741 cri.go:89] found id: ""
	I0717 19:41:43.730301  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.730311  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:41:43.730319  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:41:43.730401  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:41:43.766878  459741 cri.go:89] found id: ""
	I0717 19:41:43.766907  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.766916  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:41:43.766922  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:41:43.767012  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:41:43.810002  459741 cri.go:89] found id: ""
	I0717 19:41:43.810040  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.810051  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:41:43.810059  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:41:43.810133  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:41:43.846561  459741 cri.go:89] found id: ""
	I0717 19:41:43.846621  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.846637  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:41:43.846645  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:41:43.846715  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:41:43.884047  459741 cri.go:89] found id: ""
	I0717 19:41:43.884080  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.884091  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:41:43.884099  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:41:43.884224  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:41:43.931636  459741 cri.go:89] found id: ""
	I0717 19:41:43.931677  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.931691  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:41:43.931699  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:41:43.931768  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:41:43.969202  459741 cri.go:89] found id: ""
	I0717 19:41:43.969240  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.969260  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:41:43.969275  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:41:43.969296  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:41:44.026443  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:41:44.026500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:41:44.042750  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:41:44.042788  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:41:44.140053  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:41:44.140079  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:41:44.140093  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:41:44.263660  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:41:44.263704  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 19:41:44.311783  459741 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 19:41:44.311838  459741 out.go:239] * 
	W0717 19:41:44.311948  459741 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:41:44.311982  459741 out.go:239] * 
	W0717 19:41:44.313153  459741 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:41:44.316845  459741 out.go:177] 
	W0717 19:41:44.318001  459741 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:41:44.318059  459741 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 19:41:44.318087  459741 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 19:41:44.319471  459741 out.go:177] 
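	The start log above ends with minikube's own suggestion (out.go:239): check 'journalctl -xeu kubelet' and retry with a kubelet cgroup-driver override. A minimal sketch of that retry follows; it is not part of the captured log. Only --extra-config=kubelet.cgroup-driver=systemd and --kubernetes-version=v1.20.0 come from the log itself; the profile name is a placeholder and the follow-up inspection command is an illustrative assumption.
	
	  # Hedged sketch (not from the log): re-run the failing start with the
	  # cgroup-driver override suggested above. <profile> is a placeholder.
	  minikube start -p <profile> \
	    --kubernetes-version=v1.20.0 \
	    --extra-config=kubelet.cgroup-driver=systemd
	
	  # If the kubelet still fails to come up, inspect it on the node, as the
	  # kubeadm output advises:
	  minikube ssh -p <profile> -- sudo journalctl -xeu kubelet | tail -n 100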
	
	
	==> CRI-O <==
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.493534252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245606493502023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e57caa1-5be1-4e33-801b-b6184943720c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.494061196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=157f1e19-ec95-46b1-bc0d-d79cce2487b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.494126991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=157f1e19-ec95-46b1-bc0d-d79cce2487b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.494399386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c,PodSandboxId:7bea569d68669bce5032544241dd0ffd6fba7887bb2ee96886cc8f58ae38b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721244827911062994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785118d7-5d47-42fb-a3be-a13f7a837b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e205ae72a0b56cc35866955025cb089dee7c1709703b44d301f533a070699c96,PodSandboxId:68f1705638706c37ab2f51ff381dfcf98532f7d2d191a82d8862873af0f05610,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721244808288019270,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75d9f921-4990-4f7c-99d5-f2976d35cd5d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002,PodSandboxId:e0d7cb5205bf86d52581a3613db71edac9d0c7ef38e7d2d3b938120afcf97cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721244805202880065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hk8t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77,PodSandboxId:019ac1b79365ae4ac94c855a8c330ecc72a2bfed5a5ebc1baa4e06ea33f693a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721244797137140201,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x85f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaaf7ad0-8b1f-483c-97
7b-71ca6f2808c4,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe,PodSandboxId:7bea569d68669bce5032544241dd0ffd6fba7887bb2ee96886cc8f58ae38b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721244797230716556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785118d7-5d47-42fb-a3be-a13f7a837b
2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5,PodSandboxId:9a227e350da7ee752414b807ad484d43a843f3b32876f2b25676401bb0e3fb72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721244792441994781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f47ac0a43f0e1d61
3a6c5abca3b9fb6c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df,PodSandboxId:d2fb4c975840ed4de46f2e3aa48c65553a74f30d31002f1919a5b46ec691b5f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721244792410513739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14aefd201618a5b2395b71f20510c
fb7,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0,PodSandboxId:cd850abcfceb77e39ac1f6bd317bda2d4b106fad5dbdc756a9e6b1fe7bc475f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721244792345138290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077876edb82a9270e4e34baa8fae306c,},Annotations:map[string]string{io.kube
rnetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5,PodSandboxId:b48faef64e337c26eeb2ab8fa47848edd5a2481a7632580820da1c0e45761d39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721244792305091129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795262caeee8afbec1e31fd0b6f3a9e1,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=157f1e19-ec95-46b1-bc0d-d79cce2487b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.531380592Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa83fe99-921b-4bd1-b198-ade40bb47461 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.531834801Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa83fe99-921b-4bd1-b198-ade40bb47461 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.534188168Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afa735d4-2c98-4084-ae21-3e884abb9212 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.536169353Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245606536143597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afa735d4-2c98-4084-ae21-3e884abb9212 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.538390472Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7adaf90-e8bf-431c-93b9-13473e12648d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.539205831Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7adaf90-e8bf-431c-93b9-13473e12648d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.540934330Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c,PodSandboxId:7bea569d68669bce5032544241dd0ffd6fba7887bb2ee96886cc8f58ae38b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721244827911062994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785118d7-5d47-42fb-a3be-a13f7a837b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e205ae72a0b56cc35866955025cb089dee7c1709703b44d301f533a070699c96,PodSandboxId:68f1705638706c37ab2f51ff381dfcf98532f7d2d191a82d8862873af0f05610,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721244808288019270,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75d9f921-4990-4f7c-99d5-f2976d35cd5d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002,PodSandboxId:e0d7cb5205bf86d52581a3613db71edac9d0c7ef38e7d2d3b938120afcf97cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721244805202880065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hk8t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77,PodSandboxId:019ac1b79365ae4ac94c855a8c330ecc72a2bfed5a5ebc1baa4e06ea33f693a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721244797137140201,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x85f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaaf7ad0-8b1f-483c-97
7b-71ca6f2808c4,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe,PodSandboxId:7bea569d68669bce5032544241dd0ffd6fba7887bb2ee96886cc8f58ae38b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721244797230716556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785118d7-5d47-42fb-a3be-a13f7a837b
2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5,PodSandboxId:9a227e350da7ee752414b807ad484d43a843f3b32876f2b25676401bb0e3fb72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721244792441994781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f47ac0a43f0e1d61
3a6c5abca3b9fb6c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df,PodSandboxId:d2fb4c975840ed4de46f2e3aa48c65553a74f30d31002f1919a5b46ec691b5f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721244792410513739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14aefd201618a5b2395b71f20510c
fb7,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0,PodSandboxId:cd850abcfceb77e39ac1f6bd317bda2d4b106fad5dbdc756a9e6b1fe7bc475f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721244792345138290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077876edb82a9270e4e34baa8fae306c,},Annotations:map[string]string{io.kube
rnetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5,PodSandboxId:b48faef64e337c26eeb2ab8fa47848edd5a2481a7632580820da1c0e45761d39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721244792305091129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795262caeee8afbec1e31fd0b6f3a9e1,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7adaf90-e8bf-431c-93b9-13473e12648d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.583187702Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9bee2ef-4fe5-427c-a5fe-ccc4e4c4a195 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.583368921Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9bee2ef-4fe5-427c-a5fe-ccc4e4c4a195 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.584626864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=30524de6-70ed-472e-9319-0cf6adac6f5f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.585149404Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245606585119065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30524de6-70ed-472e-9319-0cf6adac6f5f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.585723476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c0585a2-3914-4d36-bf68-0be2fb2057ab name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.585820077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c0585a2-3914-4d36-bf68-0be2fb2057ab name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.586095461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c,PodSandboxId:7bea569d68669bce5032544241dd0ffd6fba7887bb2ee96886cc8f58ae38b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721244827911062994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785118d7-5d47-42fb-a3be-a13f7a837b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e205ae72a0b56cc35866955025cb089dee7c1709703b44d301f533a070699c96,PodSandboxId:68f1705638706c37ab2f51ff381dfcf98532f7d2d191a82d8862873af0f05610,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721244808288019270,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75d9f921-4990-4f7c-99d5-f2976d35cd5d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002,PodSandboxId:e0d7cb5205bf86d52581a3613db71edac9d0c7ef38e7d2d3b938120afcf97cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721244805202880065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hk8t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77,PodSandboxId:019ac1b79365ae4ac94c855a8c330ecc72a2bfed5a5ebc1baa4e06ea33f693a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721244797137140201,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x85f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaaf7ad0-8b1f-483c-97
7b-71ca6f2808c4,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe,PodSandboxId:7bea569d68669bce5032544241dd0ffd6fba7887bb2ee96886cc8f58ae38b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721244797230716556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785118d7-5d47-42fb-a3be-a13f7a837b
2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5,PodSandboxId:9a227e350da7ee752414b807ad484d43a843f3b32876f2b25676401bb0e3fb72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721244792441994781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f47ac0a43f0e1d61
3a6c5abca3b9fb6c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df,PodSandboxId:d2fb4c975840ed4de46f2e3aa48c65553a74f30d31002f1919a5b46ec691b5f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721244792410513739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14aefd201618a5b2395b71f20510c
fb7,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0,PodSandboxId:cd850abcfceb77e39ac1f6bd317bda2d4b106fad5dbdc756a9e6b1fe7bc475f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721244792345138290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077876edb82a9270e4e34baa8fae306c,},Annotations:map[string]string{io.kube
rnetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5,PodSandboxId:b48faef64e337c26eeb2ab8fa47848edd5a2481a7632580820da1c0e45761d39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721244792305091129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795262caeee8afbec1e31fd0b6f3a9e1,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c0585a2-3914-4d36-bf68-0be2fb2057ab name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.626955418Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e6e888b-2cc0-4843-ad0a-c0d26c5ede29 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.627045557Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e6e888b-2cc0-4843-ad0a-c0d26c5ede29 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.628618680Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=723a6ca9-46c7-4874-a2dc-24075ff058c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.628949855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245606628929768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=723a6ca9-46c7-4874-a2dc-24075ff058c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.629566006Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c616f068-1c82-4962-8824-eb7763747941 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.629635335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c616f068-1c82-4962-8824-eb7763747941 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:46:46 no-preload-713715 crio[735]: time="2024-07-17 19:46:46.629837105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c,PodSandboxId:7bea569d68669bce5032544241dd0ffd6fba7887bb2ee96886cc8f58ae38b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721244827911062994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785118d7-5d47-42fb-a3be-a13f7a837b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e205ae72a0b56cc35866955025cb089dee7c1709703b44d301f533a070699c96,PodSandboxId:68f1705638706c37ab2f51ff381dfcf98532f7d2d191a82d8862873af0f05610,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721244808288019270,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75d9f921-4990-4f7c-99d5-f2976d35cd5d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002,PodSandboxId:e0d7cb5205bf86d52581a3613db71edac9d0c7ef38e7d2d3b938120afcf97cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721244805202880065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hk8t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77,PodSandboxId:019ac1b79365ae4ac94c855a8c330ecc72a2bfed5a5ebc1baa4e06ea33f693a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721244797137140201,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x85f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaaf7ad0-8b1f-483c-97
7b-71ca6f2808c4,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe,PodSandboxId:7bea569d68669bce5032544241dd0ffd6fba7887bb2ee96886cc8f58ae38b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721244797230716556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785118d7-5d47-42fb-a3be-a13f7a837b
2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5,PodSandboxId:9a227e350da7ee752414b807ad484d43a843f3b32876f2b25676401bb0e3fb72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721244792441994781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f47ac0a43f0e1d61
3a6c5abca3b9fb6c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df,PodSandboxId:d2fb4c975840ed4de46f2e3aa48c65553a74f30d31002f1919a5b46ec691b5f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721244792410513739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14aefd201618a5b2395b71f20510c
fb7,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0,PodSandboxId:cd850abcfceb77e39ac1f6bd317bda2d4b106fad5dbdc756a9e6b1fe7bc475f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721244792345138290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077876edb82a9270e4e34baa8fae306c,},Annotations:map[string]string{io.kube
rnetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5,PodSandboxId:b48faef64e337c26eeb2ab8fa47848edd5a2481a7632580820da1c0e45761d39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721244792305091129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795262caeee8afbec1e31fd0b6f3a9e1,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c616f068-1c82-4962-8824-eb7763747941 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a2b43922786ee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   7bea569d68669       storage-provisioner
	e205ae72a0b56       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   68f1705638706       busybox
	9015174934a8d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   e0d7cb5205bf8       coredns-5cfdc65f69-hk8t7
	7511bf4f30ac3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   7bea569d68669       storage-provisioner
	ab5470bd76139       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899                                      13 minutes ago      Running             kube-proxy                1                   019ac1b79365a       kube-proxy-x85f5
	e14420efe38fa       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5                                      13 minutes ago      Running             kube-controller-manager   1                   9a227e350da7e       kube-controller-manager-no-preload-713715
	5b404425859ea       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b                                      13 minutes ago      Running             kube-scheduler            1                   d2fb4c975840e       kube-scheduler-no-preload-713715
	ade9a3d882a93       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa                                      13 minutes ago      Running             etcd                      1                   cd850abcfceb7       etcd-no-preload-713715
	94d1d32be33b0       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938                                      13 minutes ago      Running             kube-apiserver            1                   b48faef64e337       kube-apiserver-no-preload-713715
	
	
	==> coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41862 - 58330 "HINFO IN 1092279852445007707.3091120396433038524. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010275096s
	
	
	==> describe nodes <==
	Name:               no-preload-713715
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-713715
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=no-preload-713715
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T19_25_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 19:25:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-713715
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 19:46:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 19:43:59 +0000   Wed, 17 Jul 2024 19:24:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 19:43:59 +0000   Wed, 17 Jul 2024 19:24:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 19:43:59 +0000   Wed, 17 Jul 2024 19:24:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 19:43:59 +0000   Wed, 17 Jul 2024 19:33:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.66
	  Hostname:    no-preload-713715
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bf73da8038174625a1d5606b328ec5a5
	  System UUID:                bf73da80-3817-4625-a1d5-606b328ec5a5
	  Boot ID:                    2eb53699-ac94-4175-a9fd-bae1ddb1628c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5cfdc65f69-hk8t7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-713715                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-713715             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-713715    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-x85f5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-713715             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-78fcd8795b-q2jgb              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-713715 status is now: NodeHasSufficientPID
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-713715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-713715 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                21m                kubelet          Node no-preload-713715 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-713715 event: Registered Node no-preload-713715 in Controller
	  Normal  CIDRAssignmentFailed     21m                cidrAllocator    Node no-preload-713715 status is now: CIDRAssignmentFailed
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-713715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-713715 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-713715 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-713715 event: Registered Node no-preload-713715 in Controller
	
	
	==> dmesg <==
	[Jul17 19:32] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050198] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040004] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.530370] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.388984] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.597462] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.112676] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.056336] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065192] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.205948] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.140136] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.311341] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[Jul17 19:33] systemd-fstab-generator[1186]: Ignoring "noauto" option for root device
	[  +0.058452] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.981389] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +3.511450] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.808199] kauditd_printk_skb: 37 callbacks suppressed
	[  +0.722114] systemd-fstab-generator[2057]: Ignoring "noauto" option for root device
	[  +4.856604] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] <==
	{"level":"warn","ts":"2024-07-17T19:34:00.598266Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"632.477665ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:133"}
	{"level":"warn","ts":"2024-07-17T19:34:00.598539Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"484.046968ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-07-17T19:34:00.600045Z","caller":"traceutil/trace.go:171","msg":"trace[1766139772] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:596; }","duration":"485.554791ms","start":"2024-07-17T19:34:00.11448Z","end":"2024-07-17T19:34:00.600034Z","steps":["trace[1766139772] 'agreement among raft nodes before linearized reading'  (duration: 483.987517ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:34:00.6001Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T19:34:00.114448Z","time spent":"485.640285ms","remote":"127.0.0.1:48958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1141,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2024-07-17T19:34:00.600592Z","caller":"traceutil/trace.go:171","msg":"trace[2068860945] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:596; }","duration":"634.806147ms","start":"2024-07-17T19:33:59.965777Z","end":"2024-07-17T19:34:00.600583Z","steps":["trace[2068860945] 'agreement among raft nodes before linearized reading'  (duration: 632.416438ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:34:00.601159Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T19:33:59.965754Z","time spent":"635.202055ms","remote":"127.0.0.1:48826","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":157,"request content":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" "}
	{"level":"info","ts":"2024-07-17T19:34:00.600752Z","caller":"traceutil/trace.go:171","msg":"trace[1561281354] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-78fcd8795b-q2jgb; range_end:; response_count:1; response_revision:596; }","duration":"420.060105ms","start":"2024-07-17T19:34:00.180682Z","end":"2024-07-17T19:34:00.600742Z","steps":["trace[1561281354] 'agreement among raft nodes before linearized reading'  (duration: 418.974135ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:34:00.601469Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T19:34:00.180647Z","time spent":"420.811272ms","remote":"127.0.0.1:48964","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4363,"request content":"key:\"/registry/pods/kube-system/metrics-server-78fcd8795b-q2jgb\" "}
	{"level":"warn","ts":"2024-07-17T19:34:01.30624Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"424.561031ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T19:34:01.306446Z","caller":"traceutil/trace.go:171","msg":"trace[1464831555] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:597; }","duration":"424.8001ms","start":"2024-07-17T19:34:00.881633Z","end":"2024-07-17T19:34:01.306433Z","steps":["trace[1464831555] 'range keys from in-memory index tree'  (duration: 424.549355ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:34:01.306763Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.037806ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T19:34:01.306886Z","caller":"traceutil/trace.go:171","msg":"trace[1816133111] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:597; }","duration":"416.165039ms","start":"2024-07-17T19:34:00.890711Z","end":"2024-07-17T19:34:01.306876Z","steps":["trace[1816133111] 'range keys from in-memory index tree'  (duration: 416.029974ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:34:01.307104Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"377.672795ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6987775460532258506 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-78fcd8795b-q2jgb.17e317076ac668b8\" mod_revision:568 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-78fcd8795b-q2jgb.17e317076ac668b8\" value_size:830 lease:6987775460532257839 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-78fcd8795b-q2jgb.17e317076ac668b8\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T19:34:01.307179Z","caller":"traceutil/trace.go:171","msg":"trace[718222474] linearizableReadLoop","detail":"{readStateIndex:643; appliedIndex:642; }","duration":"641.921059ms","start":"2024-07-17T19:34:00.665252Z","end":"2024-07-17T19:34:01.307173Z","steps":["trace[718222474] 'read index received'  (duration: 263.916713ms)","trace[718222474] 'applied index is now lower than readState.Index'  (duration: 378.003281ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T19:34:01.307251Z","caller":"traceutil/trace.go:171","msg":"trace[732177036] transaction","detail":"{read_only:false; response_revision:598; number_of_response:1; }","duration":"695.093444ms","start":"2024-07-17T19:34:00.61215Z","end":"2024-07-17T19:34:01.307244Z","steps":["trace[732177036] 'process raft request'  (duration: 317.207082ms)","trace[732177036] 'compare'  (duration: 377.122159ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T19:34:01.30739Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T19:34:00.612141Z","time spent":"695.141402ms","remote":"127.0.0.1:48880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":925,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-78fcd8795b-q2jgb.17e317076ac668b8\" mod_revision:568 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-78fcd8795b-q2jgb.17e317076ac668b8\" value_size:830 lease:6987775460532257839 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-78fcd8795b-q2jgb.17e317076ac668b8\" > >"}
	{"level":"warn","ts":"2024-07-17T19:34:01.307588Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"695.543175ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:475"}
	{"level":"info","ts":"2024-07-17T19:34:01.307635Z","caller":"traceutil/trace.go:171","msg":"trace[234859674] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:598; }","duration":"695.59045ms","start":"2024-07-17T19:34:00.612038Z","end":"2024-07-17T19:34:01.307628Z","steps":["trace[234859674] 'agreement among raft nodes before linearized reading'  (duration: 695.498987ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:34:01.307671Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T19:34:00.612023Z","time spent":"695.643812ms","remote":"127.0.0.1:49028","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":499,"request content":"key:\"/registry/endpointslices/default/kubernetes\" "}
	{"level":"warn","ts":"2024-07-17T19:34:01.307861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"626.99886ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-78fcd8795b-q2jgb\" ","response":"range_response_count:1 size:4339"}
	{"level":"info","ts":"2024-07-17T19:34:01.307903Z","caller":"traceutil/trace.go:171","msg":"trace[1564775303] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-78fcd8795b-q2jgb; range_end:; response_count:1; response_revision:598; }","duration":"627.041019ms","start":"2024-07-17T19:34:00.680856Z","end":"2024-07-17T19:34:01.307897Z","steps":["trace[1564775303] 'agreement among raft nodes before linearized reading'  (duration: 626.949392ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:34:01.307947Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T19:34:00.680814Z","time spent":"627.127734ms","remote":"127.0.0.1:48964","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4363,"request content":"key:\"/registry/pods/kube-system/metrics-server-78fcd8795b-q2jgb\" "}
	{"level":"info","ts":"2024-07-17T19:43:14.54714Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":817}
	{"level":"info","ts":"2024-07-17T19:43:14.557083Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":817,"took":"9.528478ms","hash":3334974253,"current-db-size-bytes":2580480,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2580480,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-17T19:43:14.557176Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3334974253,"revision":817,"compact-revision":-1}
	
	
	==> kernel <==
	 19:46:46 up 14 min,  0 users,  load average: 0.17, 0.25, 0.18
	Linux no-preload-713715 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] <==
	E0717 19:43:17.296794       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0717 19:43:17.296847       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0717 19:43:17.297933       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 19:43:17.298024       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:44:17.298220       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 19:44:17.298492       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0717 19:44:17.298227       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 19:44:17.298614       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0717 19:44:17.299662       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 19:44:17.299692       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:46:17.300341       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 19:46:17.300504       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0717 19:46:17.300656       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 19:46:17.300715       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0717 19:46:17.301664       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 19:46:17.301840       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] <==
	E0717 19:41:24.276152       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:41:24.285631       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:41:54.282690       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:41:54.292834       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:42:24.289138       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:42:24.300727       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:42:54.296094       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:42:54.308039       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:43:24.302869       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:43:24.315072       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:43:54.309219       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:43:54.322140       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 19:43:59.792661       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-713715"
	I0717 19:44:19.711054       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="1.273561ms"
	E0717 19:44:24.316610       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:44:24.330126       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 19:44:34.705935       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="173.379µs"
	E0717 19:44:54.323678       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:44:54.337644       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:45:24.332177       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:45:24.347251       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:45:54.338805       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:45:54.359057       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:46:24.345245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:46:24.366985       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0717 19:33:17.553563       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0717 19:33:17.567810       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.66"]
	E0717 19:33:17.567899       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0717 19:33:17.608026       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0717 19:33:17.608075       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 19:33:17.608146       1 server_linux.go:170] "Using iptables Proxier"
	I0717 19:33:17.611039       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0717 19:33:17.611388       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0717 19:33:17.611475       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:33:17.613156       1 config.go:197] "Starting service config controller"
	I0717 19:33:17.613189       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 19:33:17.613215       1 config.go:104] "Starting endpoint slice config controller"
	I0717 19:33:17.613219       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 19:33:17.615271       1 config.go:326] "Starting node config controller"
	I0717 19:33:17.615363       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 19:33:17.713446       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 19:33:17.713514       1 shared_informer.go:320] Caches are synced for service config
	I0717 19:33:17.715529       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] <==
	I0717 19:33:13.332027       1 serving.go:386] Generated self-signed cert in-memory
	W0717 19:33:16.167961       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 19:33:16.168135       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:33:16.168254       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 19:33:16.168289       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 19:33:16.216825       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0717 19:33:16.219082       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:33:16.226514       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 19:33:16.226796       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0717 19:33:16.226649       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	W0717 19:33:16.290728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 19:33:16.291953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0717 19:33:16.291507       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 19:33:16.294567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0717 19:33:16.291524       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 19:33:16.395595       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 19:44:11 no-preload-713715 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:44:11 no-preload-713715 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:44:11 no-preload-713715 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:44:19 no-preload-713715 kubelet[1316]: E0717 19:44:19.693986    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:44:34 no-preload-713715 kubelet[1316]: E0717 19:44:34.693713    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:44:47 no-preload-713715 kubelet[1316]: E0717 19:44:47.694999    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:44:58 no-preload-713715 kubelet[1316]: E0717 19:44:58.693269    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:45:09 no-preload-713715 kubelet[1316]: E0717 19:45:09.693130    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:45:11 no-preload-713715 kubelet[1316]: E0717 19:45:11.707264    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:45:11 no-preload-713715 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:45:11 no-preload-713715 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:45:11 no-preload-713715 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:45:11 no-preload-713715 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:45:24 no-preload-713715 kubelet[1316]: E0717 19:45:24.692723    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:45:38 no-preload-713715 kubelet[1316]: E0717 19:45:38.692994    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:45:51 no-preload-713715 kubelet[1316]: E0717 19:45:51.697468    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:46:02 no-preload-713715 kubelet[1316]: E0717 19:46:02.692873    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:46:11 no-preload-713715 kubelet[1316]: E0717 19:46:11.707000    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:46:11 no-preload-713715 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:46:11 no-preload-713715 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:46:11 no-preload-713715 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:46:11 no-preload-713715 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:46:14 no-preload-713715 kubelet[1316]: E0717 19:46:14.693258    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:46:25 no-preload-713715 kubelet[1316]: E0717 19:46:25.694965    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:46:39 no-preload-713715 kubelet[1316]: E0717 19:46:39.693440    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	
	
	==> storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] <==
	I0717 19:33:17.424181       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 19:33:47.429109       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] <==
	I0717 19:33:48.019405       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 19:33:48.032557       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 19:33:48.032706       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 19:33:48.040573       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 19:33:48.040809       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-713715_070b5367-188f-4189-af1f-8086de9b29b7!
	I0717 19:33:48.041799       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"482f3537-670c-4054-ac22-126ea9033289", APIVersion:"v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-713715_070b5367-188f-4189-af1f-8086de9b29b7 became leader
	I0717 19:33:48.141848       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-713715_070b5367-188f-4189-af1f-8086de9b29b7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-713715 -n no-preload-713715
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-713715 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-q2jgb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-713715 describe pod metrics-server-78fcd8795b-q2jgb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-713715 describe pod metrics-server-78fcd8795b-q2jgb: exit status 1 (63.12245ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-q2jgb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-713715 describe pod metrics-server-78fcd8795b-q2jgb: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.45s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 19:38:52.816391  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-378944 -n default-k8s-diff-port-378944
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-17 19:47:35.971617885 +0000 UTC m=+6312.288635524
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-378944 -n default-k8s-diff-port-378944
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-378944 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-378944 logs -n 25: (2.196607851s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-369638 sudo cat                              | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo find                             | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo crio                             | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-369638                                       | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	| delete  | -p                                                     | disable-driver-mounts-728347 | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | disable-driver-mounts-728347                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:25 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-637675            | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC | 17 Jul 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-713715             | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC | 17 Jul 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-713715                                   | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-378944  | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC | 17 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC |                     |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-998147        | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-637675                 | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-713715                  | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC | 17 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-713715 --memory=2200                     | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:37 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-378944       | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:38 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-998147             | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:29:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:29:11.500453  459741 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:29:11.500622  459741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:29:11.500633  459741 out.go:304] Setting ErrFile to fd 2...
	I0717 19:29:11.500639  459741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:29:11.500842  459741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:29:11.501399  459741 out.go:298] Setting JSON to false
	I0717 19:29:11.502411  459741 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11494,"bootTime":1721233057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:29:11.502474  459741 start.go:139] virtualization: kvm guest
	I0717 19:29:11.504961  459741 out.go:177] * [old-k8s-version-998147] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:29:11.506551  459741 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 19:29:11.506614  459741 notify.go:220] Checking for updates...
	I0717 19:29:11.509388  459741 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:29:11.511209  459741 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:29:11.512669  459741 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:29:11.514164  459741 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:29:11.515499  459741 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:29:11.517240  459741 config.go:182] Loaded profile config "old-k8s-version-998147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:29:11.517702  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:29:11.517772  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:29:11.533954  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42501
	I0717 19:29:11.534390  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:29:11.534975  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:29:11.535003  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:29:11.535362  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:29:11.535550  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:29:11.537723  459741 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 19:29:11.539119  459741 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:29:11.539416  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:29:11.539452  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:29:11.554412  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32849
	I0717 19:29:11.554815  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:29:11.555296  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:29:11.555317  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:29:11.555633  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:29:11.555830  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:29:11.590907  459741 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:29:11.592089  459741 start.go:297] selected driver: kvm2
	I0717 19:29:11.592110  459741 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:29:11.592224  459741 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:29:11.592942  459741 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:29:11.593047  459741 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:29:11.607578  459741 install.go:137] /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 19:29:11.607960  459741 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:29:11.608027  459741 cni.go:84] Creating CNI manager for ""
	I0717 19:29:11.608045  459741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:29:11.608102  459741 start.go:340] cluster config:
	{Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:29:11.608223  459741 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:29:11.609956  459741 out.go:177] * Starting "old-k8s-version-998147" primary control-plane node in "old-k8s-version-998147" cluster
	I0717 19:29:15.576809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:11.611130  459741 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:29:11.611167  459741 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 19:29:11.611178  459741 cache.go:56] Caching tarball of preloaded images
	I0717 19:29:11.611285  459741 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:29:11.611302  459741 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 19:29:11.611414  459741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json ...
	I0717 19:29:11.611598  459741 start.go:360] acquireMachinesLock for old-k8s-version-998147: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:29:18.648779  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:24.728819  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:27.800821  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:33.880750  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:36.952809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:43.032777  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:46.104785  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:52.184787  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:55.260741  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:01.336761  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:04.408863  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:10.488814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:13.560771  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:19.640809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:22.712791  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:28.792742  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:31.864819  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:37.944814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:41.016844  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:47.096765  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:50.168766  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:56.248814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:59.320805  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:05.400752  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:08.472800  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:14.552805  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:17.624781  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:23.704775  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:26.776769  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:32.856798  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:35.928859  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:42.008795  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:45.080741  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:51.160806  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:54.232765  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:00.312835  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:03.384814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:09.464779  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:12.536704  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:18.616758  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:21.688749  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:27.768726  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:30.840760  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:33.845161  459147 start.go:364] duration metric: took 4m31.30170624s to acquireMachinesLock for "no-preload-713715"
	I0717 19:32:33.845231  459147 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:32:33.845239  459147 fix.go:54] fixHost starting: 
	I0717 19:32:33.845641  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:32:33.845672  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:32:33.861218  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I0717 19:32:33.861739  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:32:33.862269  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:32:33.862294  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:32:33.862688  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:32:33.862906  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:33.863078  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:32:33.864713  459147 fix.go:112] recreateIfNeeded on no-preload-713715: state=Stopped err=<nil>
	I0717 19:32:33.864747  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	W0717 19:32:33.864918  459147 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:32:33.866791  459147 out.go:177] * Restarting existing kvm2 VM for "no-preload-713715" ...
	I0717 19:32:33.842533  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:32:33.842571  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:32:33.842991  459061 buildroot.go:166] provisioning hostname "embed-certs-637675"
	I0717 19:32:33.843030  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:32:33.843258  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:32:33.844991  459061 machine.go:97] duration metric: took 4m37.424855793s to provisionDockerMachine
	I0717 19:32:33.845049  459061 fix.go:56] duration metric: took 4m37.444711115s for fixHost
	I0717 19:32:33.845058  459061 start.go:83] releasing machines lock for "embed-certs-637675", held for 4m37.444736968s
	W0717 19:32:33.845085  459061 start.go:714] error starting host: provision: host is not running
	W0717 19:32:33.845226  459061 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 19:32:33.845240  459061 start.go:729] Will try again in 5 seconds ...
	I0717 19:32:33.868034  459147 main.go:141] libmachine: (no-preload-713715) Calling .Start
	I0717 19:32:33.868203  459147 main.go:141] libmachine: (no-preload-713715) Ensuring networks are active...
	I0717 19:32:33.868998  459147 main.go:141] libmachine: (no-preload-713715) Ensuring network default is active
	I0717 19:32:33.869310  459147 main.go:141] libmachine: (no-preload-713715) Ensuring network mk-no-preload-713715 is active
	I0717 19:32:33.869667  459147 main.go:141] libmachine: (no-preload-713715) Getting domain xml...
	I0717 19:32:33.870300  459147 main.go:141] libmachine: (no-preload-713715) Creating domain...
	I0717 19:32:35.077699  459147 main.go:141] libmachine: (no-preload-713715) Waiting to get IP...
	I0717 19:32:35.078453  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.078991  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.079061  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.078942  460425 retry.go:31] will retry after 213.705648ms: waiting for machine to come up
	I0717 19:32:35.294580  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.294987  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.295015  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.294949  460425 retry.go:31] will retry after 341.137055ms: waiting for machine to come up
	I0717 19:32:35.637531  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.637894  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.637922  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.637842  460425 retry.go:31] will retry after 479.10915ms: waiting for machine to come up
	I0717 19:32:36.118434  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:36.118887  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:36.118918  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:36.118837  460425 retry.go:31] will retry after 404.249247ms: waiting for machine to come up
	I0717 19:32:36.524442  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:36.524847  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:36.524880  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:36.524812  460425 retry.go:31] will retry after 737.708741ms: waiting for machine to come up
	I0717 19:32:37.263864  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:37.264365  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:37.264393  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:37.264241  460425 retry.go:31] will retry after 793.874529ms: waiting for machine to come up
	I0717 19:32:38.846990  459061 start.go:360] acquireMachinesLock for embed-certs-637675: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:32:38.059206  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:38.059645  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:38.059671  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:38.059592  460425 retry.go:31] will retry after 831.952935ms: waiting for machine to come up
	I0717 19:32:38.893113  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:38.893595  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:38.893623  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:38.893496  460425 retry.go:31] will retry after 955.463175ms: waiting for machine to come up
	I0717 19:32:39.850681  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:39.851111  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:39.851146  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:39.851045  460425 retry.go:31] will retry after 1.513026699s: waiting for machine to come up
	I0717 19:32:41.365899  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:41.366497  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:41.366528  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:41.366435  460425 retry.go:31] will retry after 1.503398124s: waiting for machine to come up
	I0717 19:32:42.872396  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:42.872932  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:42.872961  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:42.872904  460425 retry.go:31] will retry after 2.818722445s: waiting for machine to come up
	I0717 19:32:45.692847  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:45.693240  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:45.693270  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:45.693168  460425 retry.go:31] will retry after 2.647833654s: waiting for machine to come up
	I0717 19:32:48.344167  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:48.344671  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:48.344711  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:48.344593  460425 retry.go:31] will retry after 3.625317785s: waiting for machine to come up
	I0717 19:32:51.973297  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.973853  459147 main.go:141] libmachine: (no-preload-713715) Found IP for machine: 192.168.61.66
	I0717 19:32:51.973882  459147 main.go:141] libmachine: (no-preload-713715) Reserving static IP address...
	I0717 19:32:51.973897  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has current primary IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.974288  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "no-preload-713715", mac: "52:54:00:9e:fc:38", ip: "192.168.61.66"} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:51.974314  459147 main.go:141] libmachine: (no-preload-713715) DBG | skip adding static IP to network mk-no-preload-713715 - found existing host DHCP lease matching {name: "no-preload-713715", mac: "52:54:00:9e:fc:38", ip: "192.168.61.66"}
	I0717 19:32:51.974324  459147 main.go:141] libmachine: (no-preload-713715) Reserved static IP address: 192.168.61.66
	I0717 19:32:51.974334  459147 main.go:141] libmachine: (no-preload-713715) Waiting for SSH to be available...
	I0717 19:32:51.974342  459147 main.go:141] libmachine: (no-preload-713715) DBG | Getting to WaitForSSH function...
	I0717 19:32:51.976322  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.976760  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:51.976804  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.976918  459147 main.go:141] libmachine: (no-preload-713715) DBG | Using SSH client type: external
	I0717 19:32:51.976956  459147 main.go:141] libmachine: (no-preload-713715) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa (-rw-------)
	I0717 19:32:51.976993  459147 main.go:141] libmachine: (no-preload-713715) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:32:51.977004  459147 main.go:141] libmachine: (no-preload-713715) DBG | About to run SSH command:
	I0717 19:32:51.977013  459147 main.go:141] libmachine: (no-preload-713715) DBG | exit 0
	I0717 19:32:52.100405  459147 main.go:141] libmachine: (no-preload-713715) DBG | SSH cmd err, output: <nil>: 
	I0717 19:32:52.100914  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetConfigRaw
	I0717 19:32:52.101578  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:52.103993  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.104431  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.104461  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.104779  459147 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/config.json ...
	I0717 19:32:52.104987  459147 machine.go:94] provisionDockerMachine start ...
	I0717 19:32:52.105006  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:52.105234  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.107642  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.108002  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.108027  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.108132  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.108311  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.108472  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.108628  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.108804  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.109027  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.109037  459147 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:32:52.216916  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:32:52.216949  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.217209  459147 buildroot.go:166] provisioning hostname "no-preload-713715"
	I0717 19:32:52.217238  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.217427  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.220152  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.220434  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.220472  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.220716  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.220923  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.221117  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.221230  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.221386  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.221575  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.221592  459147 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-713715 && echo "no-preload-713715" | sudo tee /etc/hostname
	I0717 19:32:52.343761  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-713715
	
	I0717 19:32:52.343802  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.347059  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.347370  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.347400  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.347652  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.347883  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.348182  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.348374  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.348625  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.348820  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.348836  459147 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-713715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-713715/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-713715' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:32:53.313707  459447 start.go:364] duration metric: took 4m16.715852426s to acquireMachinesLock for "default-k8s-diff-port-378944"
	I0717 19:32:53.313783  459447 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:32:53.313790  459447 fix.go:54] fixHost starting: 
	I0717 19:32:53.314243  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:32:53.314285  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:32:53.330763  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0717 19:32:53.331159  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:32:53.331660  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:32:53.331686  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:32:53.332089  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:32:53.332319  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:32:53.332479  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:32:53.334126  459447 fix.go:112] recreateIfNeeded on default-k8s-diff-port-378944: state=Stopped err=<nil>
	I0717 19:32:53.334172  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	W0717 19:32:53.334327  459447 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:32:53.336801  459447 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-378944" ...
	I0717 19:32:52.462144  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:32:52.462179  459147 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:32:52.462197  459147 buildroot.go:174] setting up certificates
	I0717 19:32:52.462210  459147 provision.go:84] configureAuth start
	I0717 19:32:52.462224  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.462579  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:52.465348  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.465889  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.465919  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.466069  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.468522  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.468914  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.468950  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.469041  459147 provision.go:143] copyHostCerts
	I0717 19:32:52.469126  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:32:52.469146  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:32:52.469234  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:32:52.469357  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:32:52.469367  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:32:52.469408  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:32:52.469492  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:32:52.469501  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:32:52.469535  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:32:52.469621  459147 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.no-preload-713715 san=[127.0.0.1 192.168.61.66 localhost minikube no-preload-713715]
	I0717 19:32:52.650963  459147 provision.go:177] copyRemoteCerts
	I0717 19:32:52.651037  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:32:52.651075  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.654245  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.654597  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.654616  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.654825  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.655055  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.655215  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.655411  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:52.739048  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:32:52.762566  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 19:32:52.785616  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:32:52.808881  459147 provision.go:87] duration metric: took 346.648771ms to configureAuth
	I0717 19:32:52.808922  459147 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:32:52.809145  459147 config.go:182] Loaded profile config "no-preload-713715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:32:52.809246  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.812111  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.812423  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.812457  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.812686  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.812885  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.813186  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.813346  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.813542  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.813778  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.813800  459147 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:32:53.076607  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:32:53.076638  459147 machine.go:97] duration metric: took 971.636298ms to provisionDockerMachine
	I0717 19:32:53.076652  459147 start.go:293] postStartSetup for "no-preload-713715" (driver="kvm2")
	I0717 19:32:53.076685  459147 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:32:53.076714  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.077033  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:32:53.077068  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.079605  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.079887  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.079911  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.080028  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.080217  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.080401  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.080593  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.163562  459147 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:32:53.167996  459147 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:32:53.168026  459147 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:32:53.168111  459147 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:32:53.168194  459147 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:32:53.168304  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:32:53.178039  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:32:53.201841  459147 start.go:296] duration metric: took 125.171457ms for postStartSetup
	I0717 19:32:53.201908  459147 fix.go:56] duration metric: took 19.356669392s for fixHost
	I0717 19:32:53.201944  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.204438  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.204823  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.204847  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.205012  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.205195  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.205352  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.205501  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.205632  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:53.205807  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:53.205818  459147 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:32:53.313516  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244773.289121394
	
	I0717 19:32:53.313540  459147 fix.go:216] guest clock: 1721244773.289121394
	I0717 19:32:53.313547  459147 fix.go:229] Guest: 2024-07-17 19:32:53.289121394 +0000 UTC Remote: 2024-07-17 19:32:53.201923093 +0000 UTC m=+290.801143172 (delta=87.198301ms)
	I0717 19:32:53.313569  459147 fix.go:200] guest clock delta is within tolerance: 87.198301ms
	I0717 19:32:53.313595  459147 start.go:83] releasing machines lock for "no-preload-713715", held for 19.468370802s
	I0717 19:32:53.313630  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.313917  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:53.316881  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.317256  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.317287  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.317443  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.317922  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.318107  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.318182  459147 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:32:53.318238  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.318358  459147 ssh_runner.go:195] Run: cat /version.json
	I0717 19:32:53.318384  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.321257  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321424  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321620  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.321641  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321748  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.321772  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321815  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.322061  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.322079  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.322282  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.322280  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.322459  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.322464  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.322592  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.401861  459147 ssh_runner.go:195] Run: systemctl --version
	I0717 19:32:53.425378  459147 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:32:53.567192  459147 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:32:53.575354  459147 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:32:53.575425  459147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:32:53.595781  459147 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:32:53.595818  459147 start.go:495] detecting cgroup driver to use...
	I0717 19:32:53.595955  459147 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:32:53.611488  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:32:53.625548  459147 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:32:53.625612  459147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:32:53.639207  459147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:32:53.652721  459147 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:32:53.772322  459147 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:32:53.942009  459147 docker.go:233] disabling docker service ...
	I0717 19:32:53.942092  459147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:32:53.961729  459147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:32:53.974585  459147 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:32:54.112406  459147 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:32:54.245426  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:32:54.259855  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:32:54.278930  459147 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 19:32:54.279008  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.289913  459147 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:32:54.289992  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.300687  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.312480  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.324895  459147 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:32:54.335879  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.347434  459147 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.367882  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.379415  459147 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:32:54.390488  459147 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:32:54.390554  459147 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:32:54.411855  459147 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:32:54.423747  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:32:54.562086  459147 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:32:54.707957  459147 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:32:54.708052  459147 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:32:54.712631  459147 start.go:563] Will wait 60s for crictl version
	I0717 19:32:54.712693  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:54.716329  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:32:54.753525  459147 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:32:54.753634  459147 ssh_runner.go:195] Run: crio --version
	I0717 19:32:54.782659  459147 ssh_runner.go:195] Run: crio --version
	I0717 19:32:54.813996  459147 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 19:32:53.338154  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Start
	I0717 19:32:53.338327  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring networks are active...
	I0717 19:32:53.338965  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring network default is active
	I0717 19:32:53.339348  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring network mk-default-k8s-diff-port-378944 is active
	I0717 19:32:53.339780  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Getting domain xml...
	I0717 19:32:53.340436  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Creating domain...
	I0717 19:32:54.632016  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting to get IP...
	I0717 19:32:54.632953  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.633425  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.633541  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:54.633409  460568 retry.go:31] will retry after 191.141019ms: waiting for machine to come up
	I0717 19:32:54.825767  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.826279  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.826311  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:54.826243  460568 retry.go:31] will retry after 334.738903ms: waiting for machine to come up
	I0717 19:32:55.162861  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.163361  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.163394  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:55.163319  460568 retry.go:31] will retry after 446.719082ms: waiting for machine to come up
	I0717 19:32:55.611971  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.612359  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.612388  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:55.612297  460568 retry.go:31] will retry after 387.196239ms: waiting for machine to come up
	I0717 19:32:56.000969  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.001385  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.001421  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:56.001323  460568 retry.go:31] will retry after 618.776991ms: waiting for machine to come up
	I0717 19:32:54.815249  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:54.818280  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:54.818662  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:54.818694  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:54.818925  459147 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 19:32:54.823292  459147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:32:54.837168  459147 kubeadm.go:883] updating cluster {Name:no-preload-713715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:32:54.837345  459147 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:32:54.837394  459147 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:32:54.875819  459147 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 19:32:54.875859  459147 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:32:54.875946  459147 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:54.875964  459147 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:54.875987  459147 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:54.876016  459147 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:54.876030  459147 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 19:32:54.875991  459147 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:54.875971  459147 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:54.875949  459147 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:54.878011  459147 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:54.878029  459147 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:54.878033  459147 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:54.878047  459147 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:54.878078  459147 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:54.878020  459147 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:54.878020  459147 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:54.878021  459147 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 19:32:55.044905  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.065945  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.077752  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.100576  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 19:32:55.105038  459147 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 19:32:55.105122  459147 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.105181  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.109323  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.138522  459147 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 19:32:55.138582  459147 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.138652  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.166056  459147 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 19:32:55.166116  459147 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.166172  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.225986  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.255114  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.291108  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.291133  459147 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 19:32:55.291179  459147 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.291225  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.291238  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.291283  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.291287  459147 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 19:32:55.291355  459147 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.291382  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.317030  459147 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 19:32:55.317075  459147 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.317122  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.372223  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.372291  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.372329  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.378465  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.378498  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 19:32:55.378504  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 19:32:55.378584  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.378593  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:55.378589  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:32:55.443789  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 19:32:55.443799  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 19:32:55.443851  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.443902  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.443914  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:32:55.451377  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 19:32:55.451452  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 19:32:55.451487  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 19:32:55.451496  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:32:55.451535  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 19:32:55.451540  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:32:55.452022  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 19:32:55.848543  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:56.622250  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.622728  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.622756  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:56.622674  460568 retry.go:31] will retry after 591.25664ms: waiting for machine to come up
	I0717 19:32:57.215318  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:57.215728  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:57.215760  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:57.215674  460568 retry.go:31] will retry after 1.178875952s: waiting for machine to come up
	I0717 19:32:58.396341  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:58.396810  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:58.396840  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:58.396757  460568 retry.go:31] will retry after 1.444090511s: waiting for machine to come up
	I0717 19:32:59.842294  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:59.842722  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:59.842750  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:59.842683  460568 retry.go:31] will retry after 1.660894501s: waiting for machine to come up
	I0717 19:32:57.819031  459147 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.367504857s)
	I0717 19:32:57.819080  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 19:32:57.819112  459147 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0: (2.367550192s)
	I0717 19:32:57.819123  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 19:32:57.819196  459147 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.970607417s)
	I0717 19:32:57.819211  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.375270996s)
	I0717 19:32:57.819232  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 19:32:57.819254  459147 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 19:32:57.819260  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:57.819291  459147 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:57.819322  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:57.819335  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:57.823619  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:59.879412  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.060056699s)
	I0717 19:32:59.879448  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 19:32:59.879475  459147 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.055825616s)
	I0717 19:32:59.879539  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 19:32:59.879480  459147 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:32:59.879645  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:32:59.879762  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:33:01.862179  459147 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.982496804s)
	I0717 19:33:01.862232  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 19:33:01.862284  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.982489567s)
	I0717 19:33:01.862311  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 19:33:01.862352  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:33:01.862439  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:33:01.505553  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:01.505921  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:01.505949  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:01.505876  460568 retry.go:31] will retry after 1.937668711s: waiting for machine to come up
	I0717 19:33:03.445356  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:03.445903  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:03.445949  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:03.445839  460568 retry.go:31] will retry after 2.088910223s: waiting for machine to come up
	I0717 19:33:05.537212  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:05.537609  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:05.537640  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:05.537527  460568 retry.go:31] will retry after 2.960616491s: waiting for machine to come up
	I0717 19:33:03.827643  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.965173972s)
	I0717 19:33:03.827677  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 19:33:03.827712  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:33:03.827769  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:33:05.287464  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.459663322s)
	I0717 19:33:05.287509  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 19:33:05.287543  459147 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:33:05.287638  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:33:08.500028  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:08.500625  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:08.500667  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:08.500568  460568 retry.go:31] will retry after 3.494426589s: waiting for machine to come up
	I0717 19:33:08.560006  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.272339244s)
	I0717 19:33:08.560060  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 19:33:08.560099  459147 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:33:08.560169  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:33:09.202632  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 19:33:09.202684  459147 cache_images.go:123] Successfully loaded all cached images
	I0717 19:33:09.202692  459147 cache_images.go:92] duration metric: took 14.326812062s to LoadCachedImages
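
The step above loads each cached image tarball into the VM with "sudo podman load -i ..." and reports the total as a duration metric. A minimal Go sketch of that pattern, with hypothetical tarball paths and no SSH transport, just the local load loop:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	// Hypothetical image archives; the real paths come from minikube's cache directory.
	tarballs := []string{
		"/var/lib/minikube/images/kube-proxy_v1.31.0-beta.0",
		"/var/lib/minikube/images/coredns_v1.11.1",
	}
	start := time.Now()
	for _, t := range tarballs {
		// podman load reads an image archive into local container storage.
		if out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput(); err != nil {
			log.Fatalf("podman load %s: %v\n%s", t, err, out)
		}
	}
	fmt.Printf("took %s to load %d cached images\n", time.Since(start), len(tarballs))
}
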
	I0717 19:33:09.202709  459147 kubeadm.go:934] updating node { 192.168.61.66 8443 v1.31.0-beta.0 crio true true} ...
	I0717 19:33:09.202917  459147 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-713715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:09.203024  459147 ssh_runner.go:195] Run: crio config
	I0717 19:33:09.250281  459147 cni.go:84] Creating CNI manager for ""
	I0717 19:33:09.250307  459147 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:09.250319  459147 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:09.250348  459147 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.66 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-713715 NodeName:no-preload-713715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.66"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.66 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:09.250507  459147 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.66
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-713715"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.66
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.66"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:09.250572  459147 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 19:33:09.260855  459147 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:09.260926  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:09.270148  459147 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0717 19:33:09.287113  459147 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 19:33:09.303147  459147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
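
The kubeadm/kubelet/kube-proxy YAML above is rendered from the options struct logged at kubeadm.go:181 and copied to /var/tmp/minikube/kubeadm.yaml.new before the restart. A rough sketch of that kind of rendering with text/template, using a hypothetical, much-reduced options struct and output path (not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// Opts holds the handful of values this sketch substitutes; the real
// generator works from a much larger options struct.
type Opts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	f, err := os.Create("/tmp/kubeadm.yaml.new") // hypothetical output path
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := t.Execute(f, Opts{
		AdvertiseAddress: "192.168.61.66",
		BindPort:         8443,
		NodeName:         "no-preload-713715",
		PodSubnet:        "10.244.0.0/16",
	}); err != nil {
		panic(err)
	}
}
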
	I0717 19:33:09.319718  459147 ssh_runner.go:195] Run: grep 192.168.61.66	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:09.323343  459147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.66	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
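
The bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the current mapping. The same idempotent edit as a Go sketch (IP and hostname taken from the log; writing /etc/hosts requires root):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.61.66"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing mapping for the control-plane alias.
		if strings.HasSuffix(strings.TrimSpace(line), host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
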
	I0717 19:33:09.335051  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:09.458012  459147 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:09.476517  459147 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715 for IP: 192.168.61.66
	I0717 19:33:09.476548  459147 certs.go:194] generating shared ca certs ...
	I0717 19:33:09.476581  459147 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:09.476822  459147 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:09.476888  459147 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:09.476901  459147 certs.go:256] generating profile certs ...
	I0717 19:33:09.477093  459147 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/client.key
	I0717 19:33:09.477157  459147 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.key.833d71c5
	I0717 19:33:09.477198  459147 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.key
	I0717 19:33:09.477346  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:09.477380  459147 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:09.477390  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:09.477415  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:09.477436  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:09.477460  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:09.477496  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:09.478210  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:09.523245  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:09.556326  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:09.592018  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:09.631190  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 19:33:09.663671  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:33:09.691062  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:09.715211  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:09.740818  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:09.766086  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:09.791739  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:09.817034  459147 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:09.835074  459147 ssh_runner.go:195] Run: openssl version
	I0717 19:33:09.841297  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:09.853525  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.857984  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.858052  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.864308  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:09.875577  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:09.886977  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.891840  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.891894  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.898044  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:09.910756  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:09.922945  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.927708  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.927771  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.933774  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
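
Each CA certificate above is installed by hashing its subject with openssl and symlinking it into /etc/ssl/certs as <hash>.0, the c_rehash-style layout OpenSSL clients use to find trust anchors. A sketch of that step, shelling out for the hash value just as the logged commands do (the certificate path is the one from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	// "openssl x509 -hash -noout" prints the subject hash used for lookups.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); err == nil {
		return // symlink already present
	}
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
}
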
	I0717 19:33:09.945891  459147 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:09.950743  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:09.956992  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:09.963228  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:09.969576  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:09.975912  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:09.982164  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
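
The "openssl x509 -checkend 86400" calls above ask whether each control-plane certificate will still be valid 24 hours from now. An equivalent check in pure Go, assuming a PEM-encoded certificate file at the logged path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same question as "openssl x509 -checkend 86400": still valid in 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}

Like the openssl flag, the sketch signals an imminent expiry through a non-zero exit status.
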
	I0717 19:33:09.988308  459147 kubeadm.go:392] StartCluster: {Name:no-preload-713715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:09.988412  459147 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:09.988473  459147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:10.038048  459147 cri.go:89] found id: ""
	I0717 19:33:10.038123  459147 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:10.050153  459147 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:10.050179  459147 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:10.050244  459147 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:10.061413  459147 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:10.062384  459147 kubeconfig.go:125] found "no-preload-713715" server: "https://192.168.61.66:8443"
	I0717 19:33:10.064510  459147 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:10.075459  459147 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.66
	I0717 19:33:10.075494  459147 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:10.075507  459147 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:10.075551  459147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:10.115024  459147 cri.go:89] found id: ""
	I0717 19:33:10.115093  459147 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:10.135459  459147 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:10.147000  459147 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:10.147027  459147 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:10.147074  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:10.158197  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:10.158267  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:10.168726  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:10.178115  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:10.178169  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:10.187888  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:10.197501  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:10.197564  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:10.208958  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:10.219818  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:10.219889  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:10.230847  459147 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:10.242115  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:10.352629  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.306147  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.508125  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.570418  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.632907  459147 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:11.633012  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:12.133086  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
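
After re-running the kubeadm init phases, the tool polls with "pgrep -xnf kube-apiserver.*minikube.*" until the apiserver process appears. A small sketch of that wait loop; the timeout and poll interval here are assumptions, not minikube's values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // hypothetical timeout
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
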
	I0717 19:33:13.378581  459741 start.go:364] duration metric: took 4m1.766913597s to acquireMachinesLock for "old-k8s-version-998147"
	I0717 19:33:13.378661  459741 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:33:13.378670  459741 fix.go:54] fixHost starting: 
	I0717 19:33:13.379301  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:13.379346  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:13.399824  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0717 19:33:13.400269  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:13.400788  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:33:13.400811  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:13.401179  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:13.401339  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:13.401493  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetState
	I0717 19:33:13.403027  459741 fix.go:112] recreateIfNeeded on old-k8s-version-998147: state=Stopped err=<nil>
	I0717 19:33:13.403059  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	W0717 19:33:13.403205  459741 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:33:13.405244  459741 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-998147" ...
	I0717 19:33:11.996171  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.996646  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has current primary IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.996667  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Found IP for machine: 192.168.50.238
	I0717 19:33:11.996682  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Reserving static IP address...
	I0717 19:33:11.997157  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-378944", mac: "52:54:00:45:42:f3", ip: "192.168.50.238"} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:11.997197  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | skip adding static IP to network mk-default-k8s-diff-port-378944 - found existing host DHCP lease matching {name: "default-k8s-diff-port-378944", mac: "52:54:00:45:42:f3", ip: "192.168.50.238"}
	I0717 19:33:11.997213  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Reserved static IP address: 192.168.50.238
	I0717 19:33:11.997228  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for SSH to be available...
	I0717 19:33:11.997244  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Getting to WaitForSSH function...
	I0717 19:33:11.999193  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.999538  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:11.999564  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.999654  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Using SSH client type: external
	I0717 19:33:11.999689  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa (-rw-------)
	I0717 19:33:11.999718  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:11.999733  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | About to run SSH command:
	I0717 19:33:11.999751  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | exit 0
	I0717 19:33:12.124608  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | SSH cmd err, output: <nil>: 
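
The driver's WaitForSSH step above keeps retrying until it can run "exit 0" over SSH on the freshly started VM. A rough Go equivalent that only waits for the SSH port to accept TCP connections, which is weaker than running a command; the port and timeout are assumptions:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.50.238:22" // IP from the DHCP lease above; port 22 assumed
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("SSH port is reachable")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}
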
	I0717 19:33:12.125041  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetConfigRaw
	I0717 19:33:12.125695  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:12.128263  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.128651  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.128683  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.128911  459447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/config.json ...
	I0717 19:33:12.129169  459447 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:12.129202  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:12.129412  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.131942  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.132259  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.132286  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.132464  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.132666  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.132847  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.133004  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.133213  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.133470  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.133484  459447 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:12.250371  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:12.250406  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.250672  459447 buildroot.go:166] provisioning hostname "default-k8s-diff-port-378944"
	I0717 19:33:12.250700  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.250891  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.253509  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.253895  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.253929  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.254116  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.254301  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.254467  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.254659  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.254809  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.255033  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.255048  459447 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-378944 && echo "default-k8s-diff-port-378944" | sudo tee /etc/hostname
	I0717 19:33:12.386839  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-378944
	
	I0717 19:33:12.386875  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.390265  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.390716  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.390758  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.390942  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.391165  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.391397  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.391593  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.391800  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.392028  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.392055  459447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-378944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-378944/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-378944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:12.510012  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:33:12.510080  459447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:12.510118  459447 buildroot.go:174] setting up certificates
	I0717 19:33:12.510139  459447 provision.go:84] configureAuth start
	I0717 19:33:12.510154  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.510469  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:12.513360  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.513713  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.513756  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.513840  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.516188  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.516606  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.516643  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.516778  459447 provision.go:143] copyHostCerts
	I0717 19:33:12.516867  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:12.516887  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:12.516946  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:12.517049  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:12.517060  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:12.517081  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:12.517133  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:12.517140  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:12.517157  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:12.517251  459447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-378944 san=[127.0.0.1 192.168.50.238 default-k8s-diff-port-378944 localhost minikube]
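
provision.go:117 above generates a machine server certificate signed by the local CA, covering the listed SANs. A self-contained sketch that instead produces a self-signed certificate with the same SANs using only the standard library, so it is illustrative rather than a stand-in for the CA-signed flow:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-378944"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above.
		DNSNames:    []string{"default-k8s-diff-port-378944", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.238")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	certOut, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	certOut.Close()

	keyOut, err := os.Create("server-key.pem")
	if err != nil {
		panic(err)
	}
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	keyOut.Close()
}
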
	I0717 19:33:12.664603  459447 provision.go:177] copyRemoteCerts
	I0717 19:33:12.664664  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:12.664692  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.667683  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.668071  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.668152  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.668276  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.668477  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.668665  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.668825  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:12.759500  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 19:33:12.789011  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:12.817876  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:12.847651  459447 provision.go:87] duration metric: took 337.491277ms to configureAuth
	I0717 19:33:12.847684  459447 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:12.847927  459447 config.go:182] Loaded profile config "default-k8s-diff-port-378944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:33:12.848029  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.851001  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.851460  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.851492  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.851670  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.851860  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.852050  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.852269  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.852466  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.852711  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.852736  459447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:13.135242  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:13.135272  459447 machine.go:97] duration metric: took 1.006081548s to provisionDockerMachine
	I0717 19:33:13.135286  459447 start.go:293] postStartSetup for "default-k8s-diff-port-378944" (driver="kvm2")
	I0717 19:33:13.135300  459447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:13.135331  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.135696  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:13.135731  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.138908  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.139252  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.139296  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.139577  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.139797  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.139996  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.140122  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.223998  459447 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:13.228297  459447 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:13.228327  459447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:13.228402  459447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:13.228508  459447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:13.228631  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:13.237923  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:13.262958  459447 start.go:296] duration metric: took 127.634911ms for postStartSetup
	I0717 19:33:13.263013  459447 fix.go:56] duration metric: took 19.949222697s for fixHost
	I0717 19:33:13.263040  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.265687  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.266102  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.266147  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.266274  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.266448  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.266658  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.266803  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.266974  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:13.267143  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:13.267154  459447 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:33:13.378375  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244793.352700977
	
	I0717 19:33:13.378410  459447 fix.go:216] guest clock: 1721244793.352700977
	I0717 19:33:13.378423  459447 fix.go:229] Guest: 2024-07-17 19:33:13.352700977 +0000 UTC Remote: 2024-07-17 19:33:13.263019102 +0000 UTC m=+276.814321502 (delta=89.681875ms)
	I0717 19:33:13.378449  459447 fix.go:200] guest clock delta is within tolerance: 89.681875ms
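
fix.go reads the guest clock with "date +%s.%N", compares it against the host clock, and accepts the drift when it is within tolerance, as the lines above show. A small sketch of that comparison, parsing the same seconds.nanoseconds format; the tolerance value here is an assumption:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestRaw := "1721244793.352700977" // example output of `date +%s.%N` on the guest
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // hypothetical; the tool's actual threshold may differ
	fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", delta, delta <= tolerance)
}
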
	I0717 19:33:13.378455  459447 start.go:83] releasing machines lock for "default-k8s-diff-port-378944", held for 20.064692595s
	I0717 19:33:13.378490  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.378818  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:13.382250  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.382663  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.382697  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.382819  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383336  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383515  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383640  459447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:13.383699  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.383782  459447 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:13.383808  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.386565  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.386802  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.386971  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.387022  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.387206  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.387255  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.387280  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.387377  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.387517  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.387595  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.387695  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.387769  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.387822  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.387963  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.491993  459447 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:13.498224  459447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:13.651601  459447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:13.659061  459447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:13.659131  459447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:13.679137  459447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:13.679172  459447 start.go:495] detecting cgroup driver to use...
	I0717 19:33:13.679244  459447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:13.700173  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:13.713284  459447 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:13.713345  459447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:13.727665  459447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:13.741270  459447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:13.850771  459447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:14.014484  459447 docker.go:233] disabling docker service ...
	I0717 19:33:14.014573  459447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:14.034049  459447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:14.051903  459447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:14.176188  459447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:14.339288  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:14.354934  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:14.376713  459447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:33:14.376781  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.387318  459447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:14.387395  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.401869  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.414206  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.426803  459447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:14.437992  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.448554  459447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.467390  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.478878  459447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:14.488552  459447 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:14.488623  459447 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:14.501075  459447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:33:14.511085  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:14.673591  459447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:33:14.812878  459447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:14.812974  459447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:14.818074  459447 start.go:563] Will wait 60s for crictl version
	I0717 19:33:14.818143  459447 ssh_runner.go:195] Run: which crictl
	I0717 19:33:14.822116  459447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:14.861763  459447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:14.861843  459447 ssh_runner.go:195] Run: crio --version
	I0717 19:33:14.891729  459447 ssh_runner.go:195] Run: crio --version
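After restarting crio, the runner above waits up to 60s for /var/run/crio/crio.sock to appear and for crictl to answer before declaring the runtime ready. A sketch of that kind of deadline poll follows; the plain os.Stat loop and 500ms interval are assumptions for illustration, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path (for example a CRI socket) until it
// exists or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is ready")
}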
	I0717 19:33:14.925638  459447 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 19:33:14.927088  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:14.930542  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:14.931022  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:14.931068  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:14.931326  459447 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:14.936085  459447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
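The bash one-liner above drops any existing host.minikube.internal line from /etc/hosts and appends a fresh "ip<TAB>hostname" entry. An equivalent Go sketch of that filter-and-append pattern is shown below; it writes to a scratch file rather than /etc/hosts, and the function name is illustrative.

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any line ending in the given hostname and appends a
// fresh "ip<TAB>hostname" entry, mirroring the grep/echo pipeline in the log.
func upsertHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Uses a temp file as a stand-in for /etc/hosts.
	if err := upsertHostsEntry("/tmp/hosts.example", "192.168.50.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
		return
	}
	fmt.Println("hosts entry written")
}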
	I0717 19:33:14.949590  459447 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-378944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:14.949747  459447 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:33:14.949875  459447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:14.991945  459447 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 19:33:14.992031  459447 ssh_runner.go:195] Run: which lz4
	I0717 19:33:14.996373  459447 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 19:33:15.000840  459447 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:15.000875  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
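Above, the runner stats /preloaded.tar.lz4 first and only copies the ~395 MB preload tarball when the stat fails. A local Go sketch of that check-before-copy pattern; the paths are illustrative and the copy is a plain io.Copy rather than minikube's scp over SSH.

package main

import (
	"fmt"
	"io"
	"os"
)

// ensureFile copies src to dst only if dst does not already exist,
// mirroring the "stat, then transfer on failure" sequence in the log.
func ensureFile(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to do
	} else if !os.IsNotExist(err) {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Illustrative paths only.
	if err := ensureFile("/tmp/preloaded-src.tar.lz4", "/tmp/preloaded.tar.lz4"); err != nil {
		fmt.Println("copy failed:", err)
		return
	}
	fmt.Println("preload tarball in place")
}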
	I0717 19:33:13.406372  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .Start
	I0717 19:33:13.406519  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring networks are active...
	I0717 19:33:13.407255  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring network default is active
	I0717 19:33:13.407627  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring network mk-old-k8s-version-998147 is active
	I0717 19:33:13.408062  459741 main.go:141] libmachine: (old-k8s-version-998147) Getting domain xml...
	I0717 19:33:13.408909  459741 main.go:141] libmachine: (old-k8s-version-998147) Creating domain...
	I0717 19:33:14.690306  459741 main.go:141] libmachine: (old-k8s-version-998147) Waiting to get IP...
	I0717 19:33:14.691339  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:14.691802  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:14.691860  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:14.691788  460739 retry.go:31] will retry after 292.702678ms: waiting for machine to come up
	I0717 19:33:14.986450  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:14.986962  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:14.986987  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:14.986940  460739 retry.go:31] will retry after 251.722663ms: waiting for machine to come up
	I0717 19:33:15.240732  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:15.241343  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:15.241374  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:15.241290  460739 retry.go:31] will retry after 352.774498ms: waiting for machine to come up
	I0717 19:33:15.596176  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:15.596833  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:15.596859  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:15.596740  460739 retry.go:31] will retry after 570.542375ms: waiting for machine to come up
	I0717 19:33:16.168613  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:16.169103  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:16.169125  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:16.169061  460739 retry.go:31] will retry after 505.770507ms: waiting for machine to come up
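The retry.go lines above wait for the old-k8s-version VM to obtain an IP, retrying after delays that grow slightly each round. A rough sketch of such a retry-with-increasing-delay loop; the jitter, base delay, and growth used here are assumptions for illustration, not minikube's actual retry policy.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil calls fn until it succeeds or attempts are exhausted, sleeping a
// randomized, slowly growing delay between tries, as the retry.go lines show.
func retryUntil(attempts int, fn func() error) error {
	var err error
	base := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		base += 100 * time.Millisecond
	}
	return err
}

func main() {
	tries := 0
	err := retryUntil(5, func() error {
		tries++
		if tries < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}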
	I0717 19:33:12.633596  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:12.674417  459147 api_server.go:72] duration metric: took 1.041511526s to wait for apiserver process to appear ...
	I0717 19:33:12.674443  459147 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:33:12.674473  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:12.674950  459147 api_server.go:269] stopped: https://192.168.61.66:8443/healthz: Get "https://192.168.61.66:8443/healthz": dial tcp 192.168.61.66:8443: connect: connection refused
	I0717 19:33:13.174575  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.167465  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.167503  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.167518  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.195663  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.195695  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.195712  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.203849  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.203880  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.674535  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.681650  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:16.681679  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:17.174938  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:17.186827  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:17.186890  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:17.674682  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:17.680814  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:17.680865  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:18.175463  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:18.181547  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:18.181576  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:18.675166  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:18.681507  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:18.681552  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:19.174630  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:19.183370  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:19.183416  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:19.674583  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:19.682432  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 200:
	ok
	I0717 19:33:19.691489  459147 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:33:19.691518  459147 api_server.go:131] duration metric: took 7.017066476s to wait for apiserver health ...
	I0717 19:33:19.691534  459147 cni.go:84] Creating CNI manager for ""
	I0717 19:33:19.691542  459147 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:19.693575  459147 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
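The long healthz exchange above keeps GETting /healthz, treating the 403 and 500 responses as "not ready yet", until the apiserver finally answers 200 "ok". A stripped-down sketch of that poll follows; skipping TLS verification and the 500ms interval are assumptions here, since the test client in this sketch does not hold the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout elapses. Non-200 responses (403, 500) are retried, as in the
// log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.66:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}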
	I0717 19:33:16.494615  459447 crio.go:462] duration metric: took 1.498275118s to copy over tarball
	I0717 19:33:16.494697  459447 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:18.869018  459447 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.37428331s)
	I0717 19:33:18.869052  459447 crio.go:469] duration metric: took 2.374406548s to extract the tarball
	I0717 19:33:18.869063  459447 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:18.911073  459447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:18.952704  459447 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:33:18.952731  459447 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:33:18.952740  459447 kubeadm.go:934] updating node { 192.168.50.238 8444 v1.30.2 crio true true} ...
	I0717 19:33:18.952871  459447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-378944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:18.952961  459447 ssh_runner.go:195] Run: crio config
	I0717 19:33:19.004936  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:33:19.004962  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:19.004976  459447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:19.004997  459447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.238 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-378944 NodeName:default-k8s-diff-port-378944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:19.005127  459447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.238
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-378944"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:19.005190  459447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 19:33:19.018466  459447 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:19.018532  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:19.030706  459447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 19:33:19.050125  459447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:19.066411  459447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0717 19:33:19.083019  459447 ssh_runner.go:195] Run: grep 192.168.50.238	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:19.086956  459447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:19.098483  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:19.219538  459447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:19.240712  459447 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944 for IP: 192.168.50.238
	I0717 19:33:19.240760  459447 certs.go:194] generating shared ca certs ...
	I0717 19:33:19.240784  459447 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:19.240971  459447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:19.241029  459447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:19.241046  459447 certs.go:256] generating profile certs ...
	I0717 19:33:19.241147  459447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/client.key
	I0717 19:33:19.241232  459447 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.key.e4ed83d1
	I0717 19:33:19.241292  459447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.key
	I0717 19:33:19.241430  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:19.241472  459447 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:19.241488  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:19.241527  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:19.241563  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:19.241599  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:19.241670  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:19.242447  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:19.274950  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:19.305226  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:19.348027  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:19.384636  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 19:33:19.415615  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:33:19.443553  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:19.477731  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:19.509828  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:19.536409  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:19.562482  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:19.586980  459447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:19.603021  459447 ssh_runner.go:195] Run: openssl version
	I0717 19:33:19.608707  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:19.619272  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.624082  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.624144  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.630085  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:19.640930  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:19.651717  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.656207  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.656265  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.662211  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:19.672893  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:19.686880  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.691831  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.691883  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.699526  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
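Each `openssl x509 -hash -noout` call above computes the subject-name hash OpenSSL uses to locate CA files, and the following `ln -fs` names the certificate as "<hash>.0" under /etc/ssl/certs. A sketch of the same two steps driven from Go, shelling out to openssl rather than reimplementing the hash; the paths in main are illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a PEM certificate and
// creates the "<hash>.0" symlink that the system trust directory expects.
func linkCACert(pemPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any stale link
	return link, os.Symlink(pemPath, link)
}

func main() {
	// Illustrative paths; a real run would point at /usr/share/ca-certificates
	// and /etc/ssl/certs as the log does.
	link, err := linkCACert("/tmp/minikubeCA.pem", "/tmp/certs")
	if err != nil {
		fmt.Println("linking failed:", err)
		return
	}
	fmt.Println("created", link)
}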
	I0717 19:33:19.712458  459447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:19.717815  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:19.726172  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:19.732924  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:19.739322  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:19.749452  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:19.756136  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
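The `-checkend 86400` runs above verify that each certificate remains valid for at least another 24 hours before the cluster is restarted. The same check can be done in-process with crypto/x509; a minimal sketch follows, with the certificate path in main being an illustrative assumption.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same condition `openssl x509 -checkend` tests (86400s = 24h in the log).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/tmp/apiserver.crt", 24*time.Hour) // illustrative path
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h, would regenerate")
	} else {
		fmt.Println("certificate is valid for at least another day")
	}
}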
	I0717 19:33:19.763812  459447 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-378944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:19.763936  459447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:19.763998  459447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:19.807197  459447 cri.go:89] found id: ""
	I0717 19:33:19.807303  459447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:19.819547  459447 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:19.819577  459447 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:19.819652  459447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:19.832162  459447 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:19.833260  459447 kubeconfig.go:125] found "default-k8s-diff-port-378944" server: "https://192.168.50.238:8444"
	I0717 19:33:19.835685  459447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:19.849027  459447 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.238
	I0717 19:33:19.849077  459447 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:19.849094  459447 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:19.849182  459447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:19.893260  459447 cri.go:89] found id: ""
	I0717 19:33:19.893337  459447 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:19.910254  459447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:19.920017  459447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:19.920039  459447 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:19.920093  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 19:33:19.929144  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:19.929212  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:19.938461  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 19:33:19.947172  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:19.947242  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:19.956774  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 19:33:19.965778  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:19.965832  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:19.975529  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 19:33:19.984977  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:19.985037  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:19.994548  459447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:20.003758  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:20.326183  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.077120  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.274281  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.372150  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.472510  459447 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:21.472619  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:16.676221  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:16.676783  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:16.676810  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:16.676699  460739 retry.go:31] will retry after 789.027841ms: waiting for machine to come up
	I0717 19:33:17.467899  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:17.468360  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:17.468388  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:17.468307  460739 retry.go:31] will retry after 851.039047ms: waiting for machine to come up
	I0717 19:33:18.321307  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:18.321848  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:18.321877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:18.321790  460739 retry.go:31] will retry after 1.177722997s: waiting for machine to come up
	I0717 19:33:19.501191  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:19.501846  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:19.501877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:19.501754  460739 retry.go:31] will retry after 1.20353732s: waiting for machine to come up
	I0717 19:33:20.707223  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:20.707681  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:20.707715  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:20.707620  460739 retry.go:31] will retry after 2.05955161s: waiting for machine to come up
	I0717 19:33:19.694884  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:33:19.710519  459147 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:33:19.732437  459147 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:33:19.743619  459147 system_pods.go:59] 8 kube-system pods found
	I0717 19:33:19.743647  459147 system_pods.go:61] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:33:19.743657  459147 system_pods.go:61] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:33:19.743667  459147 system_pods.go:61] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:33:19.743681  459147 system_pods.go:61] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:33:19.743688  459147 system_pods.go:61] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:33:19.743698  459147 system_pods.go:61] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:33:19.743711  459147 system_pods.go:61] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:33:19.743718  459147 system_pods.go:61] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:33:19.743725  459147 system_pods.go:74] duration metric: took 11.261865ms to wait for pod list to return data ...
	I0717 19:33:19.743742  459147 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:33:19.749108  459147 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:33:19.749135  459147 node_conditions.go:123] node cpu capacity is 2
	I0717 19:33:19.749163  459147 node_conditions.go:105] duration metric: took 5.414531ms to run NodePressure ...
	I0717 19:33:19.749183  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:22.151017  459147 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.401804862s)
	I0717 19:33:22.151065  459147 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:33:22.158240  459147 kubeadm.go:739] kubelet initialised
	I0717 19:33:22.158277  459147 kubeadm.go:740] duration metric: took 7.198956ms waiting for restarted kubelet to initialise ...
	I0717 19:33:22.158298  459147 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:22.164783  459147 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.174103  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.174465  459147 pod_ready.go:81] duration metric: took 9.568158ms for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.174513  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.174544  459147 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.184692  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "etcd-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.184804  459147 pod_ready.go:81] duration metric: took 10.23708ms for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.184862  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "etcd-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.184891  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.193029  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-apiserver-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.193143  459147 pod_ready.go:81] duration metric: took 8.227095ms for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.193175  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-apiserver-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.193234  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.200916  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.201017  459147 pod_ready.go:81] duration metric: took 7.740745ms for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.201047  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.201081  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.555554  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-proxy-x85f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.555590  459147 pod_ready.go:81] duration metric: took 354.475367ms for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.555603  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-proxy-x85f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.555612  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.977850  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-scheduler-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.977889  459147 pod_ready.go:81] duration metric: took 422.268041ms for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.977904  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-scheduler-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.977913  459147 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:23.355727  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:23.355765  459147 pod_ready.go:81] duration metric: took 377.839773ms for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:23.355778  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:23.355787  459147 pod_ready.go:38] duration metric: took 1.197476636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:23.355807  459147 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:33:23.369763  459147 ops.go:34] apiserver oom_adj: -16
	I0717 19:33:23.369789  459147 kubeadm.go:597] duration metric: took 13.319602224s to restartPrimaryControlPlane
	I0717 19:33:23.369801  459147 kubeadm.go:394] duration metric: took 13.381501456s to StartCluster
	I0717 19:33:23.369825  459147 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:23.369925  459147 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:33:23.371364  459147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:23.371643  459147 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:33:23.371763  459147 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:33:23.371851  459147 addons.go:69] Setting storage-provisioner=true in profile "no-preload-713715"
	I0717 19:33:23.371902  459147 addons.go:234] Setting addon storage-provisioner=true in "no-preload-713715"
	I0717 19:33:23.371905  459147 config.go:182] Loaded profile config "no-preload-713715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0717 19:33:23.371915  459147 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:33:23.371904  459147 addons.go:69] Setting default-storageclass=true in profile "no-preload-713715"
	I0717 19:33:23.371921  459147 addons.go:69] Setting metrics-server=true in profile "no-preload-713715"
	I0717 19:33:23.371949  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.371963  459147 addons.go:234] Setting addon metrics-server=true in "no-preload-713715"
	I0717 19:33:23.371962  459147 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-713715"
	W0717 19:33:23.371973  459147 addons.go:243] addon metrics-server should already be in state true
	I0717 19:33:23.372010  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.372248  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372283  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.372354  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372363  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372380  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.372466  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.373392  459147 out.go:177] * Verifying Kubernetes components...
	I0717 19:33:23.374639  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:23.391842  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45469
	I0717 19:33:23.391844  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I0717 19:33:23.392376  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.392449  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.392909  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.392934  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.393266  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.393283  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.393316  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.393673  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.394050  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.394066  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.394279  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.394317  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.413449  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0717 19:33:23.413977  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.414416  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.414429  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.414535  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35317
	I0717 19:33:23.414847  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.415050  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.415439  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33637
	I0717 19:33:23.415603  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.416098  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.416416  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.416442  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.416782  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.416860  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.417110  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.417129  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.417169  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.417631  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.417898  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.419162  459147 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:33:23.419540  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.420437  459147 addons.go:234] Setting addon default-storageclass=true in "no-preload-713715"
	W0717 19:33:23.420461  459147 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:33:23.420531  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.420670  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:33:23.420690  459147 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:33:23.420710  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.420935  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.420987  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.421482  459147 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:23.422876  459147 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:33:23.422895  459147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:33:23.422914  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.424665  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.425387  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.425596  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.425648  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.425860  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.426032  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.426224  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.426508  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.426884  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.426912  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.427019  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.427204  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.427375  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.427536  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.440935  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0717 19:33:23.441405  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.442015  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.442036  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.442449  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.443045  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.443086  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.462722  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0717 19:33:23.463099  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.463642  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.463666  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.464015  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.464302  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.465945  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.466153  459147 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:33:23.466168  459147 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:33:23.466187  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.469235  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.469665  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.469690  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.469961  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.470125  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.470263  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.470380  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.604321  459147 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:23.631723  459147 node_ready.go:35] waiting up to 6m0s for node "no-preload-713715" to be "Ready" ...
	I0717 19:33:23.691508  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:33:23.691839  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:33:23.870407  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:33:23.870440  459147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:33:23.962828  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:33:23.962862  459147 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:33:24.048413  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:33:24.048458  459147 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:33:24.180828  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:33:25.337869  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.645994421s)
	I0717 19:33:25.337928  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.337939  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.338245  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.338260  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.338267  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.338279  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.340140  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.340158  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.340163  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.341608  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.650024823s)
	I0717 19:33:25.341659  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.341673  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.341991  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.342008  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.342052  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.342072  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.342087  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.343152  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.343174  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.374730  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.374764  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.375093  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.375192  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.375214  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.648979  459147 node_ready.go:53] node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:25.756694  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.575723552s)
	I0717 19:33:25.756793  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.756809  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.757133  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.757197  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.757210  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.757222  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.757231  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.757463  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.757496  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.757508  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.757518  459147 addons.go:475] Verifying addon metrics-server=true in "no-preload-713715"
	I0717 19:33:25.760056  459147 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:33:21.973023  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:22.473773  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:22.494696  459447 api_server.go:72] duration metric: took 1.022184833s to wait for apiserver process to appear ...
	I0717 19:33:22.494730  459447 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:33:22.494756  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:22.495278  459447 api_server.go:269] stopped: https://192.168.50.238:8444/healthz: Get "https://192.168.50.238:8444/healthz": dial tcp 192.168.50.238:8444: connect: connection refused
	I0717 19:33:22.994814  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.523793  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:25.523836  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:25.523861  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.572664  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:25.572703  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:25.994910  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.999901  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:25.999941  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:22.769700  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:22.770437  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:22.770462  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:22.770379  460739 retry.go:31] will retry after 2.380645077s: waiting for machine to come up
	I0717 19:33:25.152531  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:25.153124  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:25.153154  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:25.152995  460739 retry.go:31] will retry after 2.594173577s: waiting for machine to come up
	I0717 19:33:25.761158  459147 addons.go:510] duration metric: took 2.389396179s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:33:26.636593  459147 node_ready.go:49] node "no-preload-713715" has status "Ready":"True"
	I0717 19:33:26.636631  459147 node_ready.go:38] duration metric: took 3.004871258s for node "no-preload-713715" to be "Ready" ...
	I0717 19:33:26.636647  459147 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:26.645025  459147 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.657588  459147 pod_ready.go:92] pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:26.657621  459147 pod_ready.go:81] duration metric: took 12.564266ms for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.657643  459147 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.495865  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:26.501901  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:26.501948  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:26.995379  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:27.007246  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:27.007293  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:27.495657  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:27.500340  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:27.500376  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:27.995477  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.001272  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:28.001311  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:28.495106  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.499745  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:28.499785  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:28.994956  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.999368  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 200:
	ok
	I0717 19:33:29.005912  459447 api_server.go:141] control plane version: v1.30.2
	I0717 19:33:29.005941  459447 api_server.go:131] duration metric: took 6.511204058s to wait for apiserver health ...
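	The api_server.go lines above poll https://192.168.50.238:8444/healthz, treating 500 responses (with "[-]poststarthook/... failed" entries) as "not ready yet" and stopping once a 200 comes back. The following is a minimal, hypothetical Go sketch of that retry pattern only; it is not minikube's actual api_server.go, and the helper name waitForHealthz, the 500ms retry interval, and the skipped TLS verification are illustrative assumptions (the URL and overall behaviour are taken from the log).

	// Minimal sketch (not minikube's code): poll the apiserver /healthz
	// endpoint until it returns 200 or the timeout elapses.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// Illustration only: skip cert verification for the in-VM apiserver.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz check passed
				}
				// A 500 here corresponds to the "healthz check failed" dumps above; retry.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.238:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}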
	I0717 19:33:29.005952  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:33:29.005958  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:29.007962  459447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:33:29.009467  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:33:29.020044  459447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:33:29.039591  459447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:33:29.049534  459447 system_pods.go:59] 8 kube-system pods found
	I0717 19:33:29.049575  459447 system_pods.go:61] "coredns-7db6d8ff4d-zrllj" [a343d67b-7bfe-4433-a6a0-dd129f622484] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:33:29.049585  459447 system_pods.go:61] "etcd-default-k8s-diff-port-378944" [8b73f940-3131-4c49-88a8-909e448a17fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:33:29.049592  459447 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-378944" [4368acf5-fcf0-4bb1-8518-dc883a3ad94a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:33:29.049600  459447 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-378944" [a9dce074-19b1-4375-bb51-2fa3a7e628a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:33:29.049605  459447 system_pods.go:61] "kube-proxy-qq6gq" [7cd51f2c-1d5d-4376-8685-a4912f158995] Running
	I0717 19:33:29.049609  459447 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-378944" [2889aa80-5d65-485f-b4ef-396e76a40a80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:33:29.049617  459447 system_pods.go:61] "metrics-server-569cc877fc-7rl9d" [217e917f-6179-4b21-baed-7293ef9f6fc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:33:29.049621  459447 system_pods.go:61] "storage-provisioner" [fc434634-e675-4df7-8df2-330e3f2cf36b] Running
	I0717 19:33:29.049628  459447 system_pods.go:74] duration metric: took 10.013687ms to wait for pod list to return data ...
	I0717 19:33:29.049640  459447 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:33:29.053279  459447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:33:29.053306  459447 node_conditions.go:123] node cpu capacity is 2
	I0717 19:33:29.053318  459447 node_conditions.go:105] duration metric: took 3.672966ms to run NodePressure ...
	I0717 19:33:29.053336  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:29.329460  459447 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:33:29.335545  459447 kubeadm.go:739] kubelet initialised
	I0717 19:33:29.335570  459447 kubeadm.go:740] duration metric: took 6.082515ms waiting for restarted kubelet to initialise ...
	I0717 19:33:29.335587  459447 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:29.343632  459447 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.348772  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.348798  459447 pod_ready.go:81] duration metric: took 5.144899ms for pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.348810  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.348820  459447 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.354355  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.354386  459447 pod_ready.go:81] duration metric: took 5.550767ms for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.354398  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.354410  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.359416  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.359433  459447 pod_ready.go:81] duration metric: took 5.007721ms for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.359442  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.359448  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:31.369477  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:27.748311  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:27.748683  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:27.748710  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:27.748647  460739 retry.go:31] will retry after 3.034683519s: waiting for machine to come up
	I0717 19:33:30.784524  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.784995  459741 main.go:141] libmachine: (old-k8s-version-998147) Found IP for machine: 192.168.72.208
	I0717 19:33:30.785018  459741 main.go:141] libmachine: (old-k8s-version-998147) Reserving static IP address...
	I0717 19:33:30.785042  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has current primary IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.785437  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "old-k8s-version-998147", mac: "52:54:00:e7:d4:91", ip: "192.168.72.208"} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.785462  459741 main.go:141] libmachine: (old-k8s-version-998147) Reserved static IP address: 192.168.72.208
	I0717 19:33:30.785478  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | skip adding static IP to network mk-old-k8s-version-998147 - found existing host DHCP lease matching {name: "old-k8s-version-998147", mac: "52:54:00:e7:d4:91", ip: "192.168.72.208"}
	I0717 19:33:30.785490  459741 main.go:141] libmachine: (old-k8s-version-998147) Waiting for SSH to be available...
	I0717 19:33:30.785502  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Getting to WaitForSSH function...
	I0717 19:33:30.787861  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.788286  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.788339  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.788506  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using SSH client type: external
	I0717 19:33:30.788535  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa (-rw-------)
	I0717 19:33:30.788575  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:30.788592  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | About to run SSH command:
	I0717 19:33:30.788605  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | exit 0
	I0717 19:33:30.916827  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | SSH cmd err, output: <nil>: 
	I0717 19:33:30.917232  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetConfigRaw
	I0717 19:33:30.917949  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:30.920672  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.921033  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.921069  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.921321  459741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json ...
	I0717 19:33:30.921518  459741 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:30.921538  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:30.921777  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:30.923995  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.924337  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.924364  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.924515  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:30.924708  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:30.924894  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:30.925021  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:30.925229  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:30.925417  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:30.925428  459741 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:31.037218  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:31.037249  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.037537  459741 buildroot.go:166] provisioning hostname "old-k8s-version-998147"
	I0717 19:33:31.037569  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.037782  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.040877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.041209  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.041252  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.041382  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.041577  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.041764  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.041940  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.042121  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.042313  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.042329  459741 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-998147 && echo "old-k8s-version-998147" | sudo tee /etc/hostname
	I0717 19:33:31.169368  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-998147
	
	I0717 19:33:31.169401  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.172170  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.172475  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.172520  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.172739  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.172950  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.173133  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.173321  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.173557  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.173809  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.173828  459741 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-998147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-998147/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-998147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:31.293920  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:33:31.293957  459741 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:31.293997  459741 buildroot.go:174] setting up certificates
	I0717 19:33:31.294010  459741 provision.go:84] configureAuth start
	I0717 19:33:31.294022  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.294383  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:31.297356  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.297766  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.297800  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.297961  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.300159  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.300454  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.300507  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.300638  459741 provision.go:143] copyHostCerts
	I0717 19:33:31.300707  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:31.300721  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:31.300787  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:31.300917  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:31.300929  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:31.300962  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:31.301038  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:31.301046  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:31.301066  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:31.301112  459741 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-998147 san=[127.0.0.1 192.168.72.208 localhost minikube old-k8s-version-998147]
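	The provision.go line above issues a machine server certificate signed by the profile CA, with the SANs listed in the log (127.0.0.1, 192.168.72.208, localhost, minikube, old-k8s-version-998147). The sketch below is a hypothetical stand-in using Go's crypto/x509, not minikube's provisioning code; the self-signed CA, key sizes, and validity periods are assumptions made for illustration, while the SANs and organization string come from the log line.

	// Minimal sketch (hypothetical): sign a server cert with a CA, using the
	// SANs from the "generating server cert" log line above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Self-signed CA, standing in for .minikube/certs/ca.pem / ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-998147"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.208")},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-998147"},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
	}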
	I0717 19:33:32.217560  459061 start.go:364] duration metric: took 53.370503448s to acquireMachinesLock for "embed-certs-637675"
	I0717 19:33:32.217640  459061 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:33:32.217653  459061 fix.go:54] fixHost starting: 
	I0717 19:33:32.218221  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:32.218273  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:32.236152  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0717 19:33:32.236693  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:32.237234  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:33:32.237261  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:32.237630  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:32.237827  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:32.237981  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:33:32.239582  459061 fix.go:112] recreateIfNeeded on embed-certs-637675: state=Stopped err=<nil>
	I0717 19:33:32.239630  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	W0717 19:33:32.239777  459061 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:33:32.241662  459061 out.go:177] * Restarting existing kvm2 VM for "embed-certs-637675" ...
	I0717 19:33:28.164383  459147 pod_ready.go:92] pod "etcd-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.164416  459147 pod_ready.go:81] duration metric: took 1.506759615s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.164430  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.169329  459147 pod_ready.go:92] pod "kube-apiserver-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.169359  459147 pod_ready.go:81] duration metric: took 4.920897ms for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.169374  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.174231  459147 pod_ready.go:92] pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.174256  459147 pod_ready.go:81] duration metric: took 4.874197ms for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.174270  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:30.181752  459147 pod_ready.go:102] pod "kube-proxy-x85f5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:32.181095  459147 pod_ready.go:92] pod "kube-proxy-x85f5" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:32.181128  459147 pod_ready.go:81] duration metric: took 4.006849577s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.181146  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.186196  459147 pod_ready.go:92] pod "kube-scheduler-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:32.186226  459147 pod_ready.go:81] duration metric: took 5.071066ms for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.186240  459147 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:31.522479  459741 provision.go:177] copyRemoteCerts
	I0717 19:33:31.522546  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:31.522602  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.525768  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.526171  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.526203  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.526344  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.526551  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.526724  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.526904  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:31.612117  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 19:33:31.638832  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:31.664757  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:31.689941  459741 provision.go:87] duration metric: took 395.916596ms to configureAuth
	I0717 19:33:31.689975  459741 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:31.690190  459741 config.go:182] Loaded profile config "old-k8s-version-998147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:33:31.690265  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.692837  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.693207  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.693234  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.693449  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.693671  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.693826  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.694059  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.694245  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.694413  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.694429  459741 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:31.974825  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:31.974852  459741 machine.go:97] duration metric: took 1.053320969s to provisionDockerMachine
	I0717 19:33:31.974865  459741 start.go:293] postStartSetup for "old-k8s-version-998147" (driver="kvm2")
	I0717 19:33:31.974875  459741 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:31.974896  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:31.975219  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:31.975248  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.978388  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.978767  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.978799  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.979026  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.979228  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.979423  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.979548  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.063516  459741 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:32.067826  459741 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:32.067854  459741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:32.067935  459741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:32.068032  459741 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:32.068178  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:32.077672  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:32.102750  459741 start.go:296] duration metric: took 127.86801ms for postStartSetup
	I0717 19:33:32.102793  459741 fix.go:56] duration metric: took 18.724124854s for fixHost
	I0717 19:33:32.102816  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.105928  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.106324  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.106349  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.106498  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.106750  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.106912  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.107091  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.107267  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:32.107435  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:32.107447  459741 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:33:32.217378  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244812.173823160
	
	I0717 19:33:32.217412  459741 fix.go:216] guest clock: 1721244812.173823160
	I0717 19:33:32.217424  459741 fix.go:229] Guest: 2024-07-17 19:33:32.17382316 +0000 UTC Remote: 2024-07-17 19:33:32.102798084 +0000 UTC m=+260.639424711 (delta=71.025076ms)
	I0717 19:33:32.217462  459741 fix.go:200] guest clock delta is within tolerance: 71.025076ms
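	The fix.go lines above compare the guest clock against the host-side timestamp and report the drift (71.025076ms here) as within tolerance. Below is a small, hypothetical Go sketch of that comparison only; the function name clockDriftOK and the 2s tolerance are assumptions, while the 71ms drift value mirrors the log.

	// Minimal sketch (hypothetical, not minikube's fix.go): decide whether
	// guest/host clock drift is within a tolerance.
	package main

	import (
		"fmt"
		"time"
	)

	func clockDriftOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(71 * time.Millisecond) // drift of the same order as in the log
		if delta, ok := clockDriftOK(guest, host, 2*time.Second); ok {
			fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
		}
	}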
	I0717 19:33:32.217476  459741 start.go:83] releasing machines lock for "old-k8s-version-998147", held for 18.838841423s
	I0717 19:33:32.217515  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.217908  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:32.221349  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.221669  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.221701  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.221823  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222444  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222647  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222744  459741 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:32.222799  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.222935  459741 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:32.222963  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.225811  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.225842  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226180  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.226207  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226235  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.226252  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226347  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.226651  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.226654  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.226818  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.226911  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.226963  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.227238  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.227243  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.331645  459741 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:32.338968  459741 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:32.491164  459741 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:32.498407  459741 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:32.498472  459741 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:32.515829  459741 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:32.515858  459741 start.go:495] detecting cgroup driver to use...
	I0717 19:33:32.515926  459741 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:32.534094  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:32.549874  459741 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:32.549938  459741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:32.565389  459741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:32.580187  459741 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:32.709855  459741 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:32.889734  459741 docker.go:233] disabling docker service ...
	I0717 19:33:32.889804  459741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:32.909179  459741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:32.923944  459741 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:33.043740  459741 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:33.174272  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:33.189545  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:33.210166  459741 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 19:33:33.210238  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.222478  459741 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:33.222547  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.234479  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.247161  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.258702  459741 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:33.271516  459741 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:33.282032  459741 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:33.282087  459741 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:33.296554  459741 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:33:33.307378  459741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:33.447447  459741 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:33:33.606295  459741 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:33.606388  459741 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:33.611193  459741 start.go:563] Will wait 60s for crictl version
	I0717 19:33:33.611252  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:33.615370  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:33.660721  459741 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:33.660803  459741 ssh_runner.go:195] Run: crio --version
	I0717 19:33:33.695406  459741 ssh_runner.go:195] Run: crio --version
	I0717 19:33:33.727703  459741 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 19:33:32.243015  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Start
	I0717 19:33:32.243191  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring networks are active...
	I0717 19:33:32.244008  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring network default is active
	I0717 19:33:32.244302  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring network mk-embed-certs-637675 is active
	I0717 19:33:32.244826  459061 main.go:141] libmachine: (embed-certs-637675) Getting domain xml...
	I0717 19:33:32.245560  459061 main.go:141] libmachine: (embed-certs-637675) Creating domain...
	I0717 19:33:33.537081  459061 main.go:141] libmachine: (embed-certs-637675) Waiting to get IP...
	I0717 19:33:33.538117  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:33.538562  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:33.538630  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:33.538531  460929 retry.go:31] will retry after 245.180235ms: waiting for machine to come up
	I0717 19:33:33.784957  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:33.785535  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:33.785567  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:33.785490  460929 retry.go:31] will retry after 353.289988ms: waiting for machine to come up
	I0717 19:33:34.141088  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.141697  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.141721  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.141637  460929 retry.go:31] will retry after 404.344963ms: waiting for machine to come up
	I0717 19:33:34.547331  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.547928  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.547956  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.547822  460929 retry.go:31] will retry after 382.194721ms: waiting for machine to come up
	I0717 19:33:34.931269  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.931746  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.931776  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.931653  460929 retry.go:31] will retry after 485.884671ms: waiting for machine to come up
	I0717 19:33:35.419418  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:35.419957  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:35.419991  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:35.419896  460929 retry.go:31] will retry after 598.409396ms: waiting for machine to come up
	I0717 19:33:36.019507  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:36.020091  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:36.020118  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:36.020041  460929 retry.go:31] will retry after 815.010839ms: waiting for machine to come up
	I0717 19:33:33.866250  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:35.869264  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:33.729003  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:33.732254  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:33.732730  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:33.732761  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:33.732992  459741 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:33.737578  459741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:33.751952  459741 kubeadm.go:883] updating cluster {Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:33.752069  459741 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:33:33.752141  459741 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:33.799085  459741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:33:33.799167  459741 ssh_runner.go:195] Run: which lz4
	I0717 19:33:33.803899  459741 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 19:33:33.808398  459741 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:33.808431  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 19:33:35.539736  459741 crio.go:462] duration metric: took 1.735871318s to copy over tarball
	I0717 19:33:35.539833  459741 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:34.210207  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:36.693543  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:36.837115  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:36.837531  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:36.837560  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:36.837482  460929 retry.go:31] will retry after 1.072167201s: waiting for machine to come up
	I0717 19:33:37.911591  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:37.912149  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:37.912173  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:37.912104  460929 retry.go:31] will retry after 1.782290473s: waiting for machine to come up
	I0717 19:33:39.696512  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:39.696980  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:39.697015  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:39.696923  460929 retry.go:31] will retry after 1.896567581s: waiting for machine to come up
	I0717 19:33:36.872836  459447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.872865  459447 pod_ready.go:81] duration metric: took 7.513409896s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.872876  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qq6gq" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.878642  459447 pod_ready.go:92] pod "kube-proxy-qq6gq" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.878665  459447 pod_ready.go:81] duration metric: took 5.782297ms for pod "kube-proxy-qq6gq" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.878673  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.887916  459447 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.887943  459447 pod_ready.go:81] duration metric: took 9.259629ms for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.887957  459447 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:39.411899  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:38.677338  459741 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.137463162s)
	I0717 19:33:38.677381  459741 crio.go:469] duration metric: took 3.137607875s to extract the tarball
	I0717 19:33:38.677396  459741 ssh_runner.go:146] rm: /preloaded.tar.lz4
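	The preload path above is: check whether /preloaded.tar.lz4 already exists on the guest, copy the ~473 MB cached image tarball over SSH when it does not, unpack it into /var, and delete it. A condensed sketch of the guest-side commands from the log:

	stat -c "%s %y" /preloaded.tar.lz4        # fails here, so the cached tarball is copied over
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	rm /preloaded.tar.lz4                     # reclaim the space once the images are unpacked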
	I0717 19:33:38.721981  459741 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:38.756640  459741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:33:38.756670  459741 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:33:38.756755  459741 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:38.756840  459741 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:38.756885  459741 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:38.756923  459741 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.756887  459741 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 19:33:38.756866  459741 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:38.756875  459741 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:38.757061  459741 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:38.758622  459741 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:38.758705  459741 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 19:33:38.758860  459741 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:38.758902  459741 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.758945  459741 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:38.758977  459741 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:38.759058  459741 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:38.759126  459741 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:38.947033  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 19:33:38.978340  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.989519  459741 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 19:33:38.989583  459741 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 19:33:38.989631  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.007170  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.034177  459741 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 19:33:39.034232  459741 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:39.034282  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.034287  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 19:33:39.062389  459741 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 19:33:39.062443  459741 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.062490  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.080521  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:39.080640  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 19:33:39.080739  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.101886  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.114010  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.122572  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.131514  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 19:33:39.145327  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 19:33:39.187564  459741 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 19:33:39.187685  459741 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.187756  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.192838  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.232745  459741 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 19:33:39.232807  459741 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.232822  459741 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 19:33:39.232864  459741 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.232897  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.232918  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.232867  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.249586  459741 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 19:33:39.249634  459741 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.249677  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.280522  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 19:33:39.280616  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.280622  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.280736  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.354545  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 19:33:39.354577  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 19:33:39.354740  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 19:33:39.640493  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:39.792919  459741 cache_images.go:92] duration metric: took 1.03622454s to LoadCachedImages
	W0717 19:33:39.793071  459741 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
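	The cache_images sequence above checks each required image with podman, removes any copy that is not at the pinned digest with crictl, and then tries to load the image tarball from the local cache directory; it gives up here because pause_3.2 is missing from that cache. Per image, the guest-side steps look roughly like this (sketch using the pause:3.2 example from the log):

	img=registry.k8s.io/pause:3.2
	sudo podman image inspect --format '{{.Id}}' "$img"   # present, but not at the expected hash
	sudo /usr/bin/crictl rmi "$img"                        # drop the mismatched copy
	# the next step would load the cached tarball, which this run does not have:
	ls /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2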
	I0717 19:33:39.793093  459741 kubeadm.go:934] updating node { 192.168.72.208 8443 v1.20.0 crio true true} ...
	I0717 19:33:39.793266  459741 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-998147 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:39.793390  459741 ssh_runner.go:195] Run: crio config
	I0717 19:33:39.854291  459741 cni.go:84] Creating CNI manager for ""
	I0717 19:33:39.854320  459741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:39.854333  459741 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:39.854355  459741 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-998147 NodeName:old-k8s-version-998147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 19:33:39.854569  459741 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-998147"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:39.854672  459741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 19:33:39.865802  459741 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:39.865892  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:39.878728  459741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 19:33:39.899402  459741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:39.917946  459741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 19:33:39.937916  459741 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:39.942211  459741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:39.957083  459741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:40.077407  459741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:40.096211  459741 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147 for IP: 192.168.72.208
	I0717 19:33:40.096244  459741 certs.go:194] generating shared ca certs ...
	I0717 19:33:40.096269  459741 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:40.096511  459741 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:40.096578  459741 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:40.096592  459741 certs.go:256] generating profile certs ...
	I0717 19:33:40.096727  459741 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/client.key
	I0717 19:33:40.096794  459741 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key.204e9011
	I0717 19:33:40.096852  459741 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key
	I0717 19:33:40.097009  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:40.097049  459741 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:40.097062  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:40.097095  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:40.097133  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:40.097161  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:40.097215  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:40.097920  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:40.144174  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:40.182700  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:40.222340  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:40.259248  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 19:33:40.302619  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:33:40.335170  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:40.373447  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:33:40.409075  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:40.435692  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:40.460419  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:40.492357  459741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:40.515212  459741 ssh_runner.go:195] Run: openssl version
	I0717 19:33:40.523462  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:40.537951  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.544201  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.544264  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.552233  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:40.567486  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:40.583035  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.589287  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.589367  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.595802  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:40.613013  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:40.625080  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.630225  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.630298  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.636697  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
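	Each CA above is copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA, 51391683.0 and 3ec20f2e.0 for the test certificates), which is how OpenSSL locates trust anchors. A minimal sketch of the hashing step for one certificate:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # prints the subject hash, e.g. b5213941
	sudo ln -fs "$cert" /etc/ssl/certs/"$hash".0    # the .0 suffix disambiguates hash collisions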
	I0717 19:33:40.647728  459741 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:40.653165  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:40.659380  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:40.666126  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:40.673361  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:40.680123  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:40.686669  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
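	The -checkend 86400 runs above verify that each control-plane certificate remains valid for at least the next 24 hours before the existing certs are reused. For example:

	# exit status 0 means the certificate does not expire within the next 86400 seconds (24 h)
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid for at least 24h"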
	I0717 19:33:40.693569  459741 kubeadm.go:392] StartCluster: {Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:40.693682  459741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:40.693767  459741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:40.737536  459741 cri.go:89] found id: ""
	I0717 19:33:40.737637  459741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:40.749268  459741 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:40.749292  459741 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:40.749347  459741 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:40.760298  459741 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:40.761436  459741 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-998147" does not appear in /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:33:40.762162  459741 kubeconfig.go:62] /home/jenkins/minikube-integration/19282-392903/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-998147" cluster setting kubeconfig missing "old-k8s-version-998147" context setting]
	I0717 19:33:40.763136  459741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:40.860353  459741 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:40.871291  459741 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.208
	I0717 19:33:40.871329  459741 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:40.871348  459741 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:40.871404  459741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:40.909329  459741 cri.go:89] found id: ""
	I0717 19:33:40.909419  459741 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:40.926501  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:40.937534  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:40.937565  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:40.937640  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:40.946613  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:40.946692  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:40.956996  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:40.965988  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:40.966046  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:40.975285  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:40.984577  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:40.984642  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:40.994458  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:41.007766  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:41.007821  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:41.020451  459741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:41.034173  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:41.176766  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:38.694137  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:40.694562  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:41.594983  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:41.595523  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:41.595554  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:41.595469  460929 retry.go:31] will retry after 2.022688841s: waiting for machine to come up
	I0717 19:33:43.619805  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:43.620241  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:43.620277  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:43.620212  460929 retry.go:31] will retry after 3.581051367s: waiting for machine to come up
	I0717 19:33:41.896941  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:44.394301  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:42.579917  459741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.403105878s)
	I0717 19:33:42.579958  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:42.840718  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:42.961394  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:43.055710  459741 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:43.055799  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:43.556468  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:44.055954  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:44.555966  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:45.056266  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:45.556627  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:46.056807  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:42.695989  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:45.194178  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:47.195661  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:47.205836  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:47.206321  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:47.206343  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:47.206278  460929 retry.go:31] will retry after 4.261122451s: waiting for machine to come up
	I0717 19:33:46.894466  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:49.395152  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:46.555904  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:47.056616  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:47.556787  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:48.056072  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:48.555979  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:49.056074  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:49.556619  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:50.056758  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:50.555862  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:51.055991  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
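	After the kubeadm init phases, minikube waits for the apiserver process to appear; the repeated Run: lines above are that poll, issued roughly every 500 ms. A shell sketch of the same wait:

	# loop until a kube-apiserver process whose command line mentions minikube shows up
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done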
	I0717 19:33:49.692660  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.693700  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.470426  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.470961  459061 main.go:141] libmachine: (embed-certs-637675) Found IP for machine: 192.168.39.140
	I0717 19:33:51.470987  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has current primary IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.470994  459061 main.go:141] libmachine: (embed-certs-637675) Reserving static IP address...
	I0717 19:33:51.471473  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "embed-certs-637675", mac: "52:54:00:33:d5:fa", ip: "192.168.39.140"} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.471502  459061 main.go:141] libmachine: (embed-certs-637675) Reserved static IP address: 192.168.39.140
	I0717 19:33:51.471530  459061 main.go:141] libmachine: (embed-certs-637675) DBG | skip adding static IP to network mk-embed-certs-637675 - found existing host DHCP lease matching {name: "embed-certs-637675", mac: "52:54:00:33:d5:fa", ip: "192.168.39.140"}
	I0717 19:33:51.471548  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Getting to WaitForSSH function...
	I0717 19:33:51.471563  459061 main.go:141] libmachine: (embed-certs-637675) Waiting for SSH to be available...
	I0717 19:33:51.474038  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.474414  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.474445  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.474588  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Using SSH client type: external
	I0717 19:33:51.474617  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa (-rw-------)
	I0717 19:33:51.474655  459061 main.go:141] libmachine: (embed-certs-637675) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:51.474675  459061 main.go:141] libmachine: (embed-certs-637675) DBG | About to run SSH command:
	I0717 19:33:51.474699  459061 main.go:141] libmachine: (embed-certs-637675) DBG | exit 0
	I0717 19:33:51.604737  459061 main.go:141] libmachine: (embed-certs-637675) DBG | SSH cmd err, output: <nil>: 
	I0717 19:33:51.605100  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetConfigRaw
	I0717 19:33:51.605831  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:51.608613  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.608977  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.609023  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.609289  459061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/config.json ...
	I0717 19:33:51.609523  459061 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:51.609557  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:51.609778  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.611949  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.612259  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.612295  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.612408  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.612598  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.612765  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.612911  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.613071  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.613293  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.613307  459061 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:51.716785  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:51.716815  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.717101  459061 buildroot.go:166] provisioning hostname "embed-certs-637675"
	I0717 19:33:51.717136  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.717318  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.719807  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.720137  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.720163  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.720315  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.720545  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.720719  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.720892  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.721086  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.721258  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.721271  459061 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-637675 && echo "embed-certs-637675" | sudo tee /etc/hostname
	I0717 19:33:51.844077  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-637675
	
	I0717 19:33:51.844111  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.847369  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.847949  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.847987  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.848185  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.848361  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.848523  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.848703  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.848912  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.849127  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.849145  459061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-637675' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-637675/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-637675' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:51.961570  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
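For context on the repeated "About to run SSH command" / "SSH cmd err, output" pairs above: each one is a single command executed over a fresh SSH session against the guest. A minimal Go sketch of that pattern using golang.org/x/crypto/ssh follows; the address, user, and key path are placeholders, and this is not minikube's actual ssh_runner code.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH opens one session on an already-provisioned guest, runs a single
// command, and returns combined stdout/stderr, mirroring the
// "About to run SSH command" / "SSH cmd err, output" pairs in the log.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Placeholder connection details, not taken from this run.
	out, err := runSSH("192.168.39.140:22", "docker", "/path/to/id_rsa",
		`sudo hostname embed-certs-637675 && echo "embed-certs-637675" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}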
	I0717 19:33:51.961608  459061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:51.961632  459061 buildroot.go:174] setting up certificates
	I0717 19:33:51.961644  459061 provision.go:84] configureAuth start
	I0717 19:33:51.961658  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.961931  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:51.964788  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.965123  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.965150  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.965303  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.967517  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.967881  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.967910  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.968060  459061 provision.go:143] copyHostCerts
	I0717 19:33:51.968129  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:51.968140  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:51.968203  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:51.968333  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:51.968344  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:51.968371  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:51.968546  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:51.968558  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:51.968605  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:51.968692  459061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.embed-certs-637675 san=[127.0.0.1 192.168.39.140 embed-certs-637675 localhost minikube]
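The provision step above issues a server certificate whose SAN list covers the VM IP plus the embed-certs-637675/localhost/minikube hostnames. A rough equivalent with the standard crypto/x509 package, generating a throwaway CA in-process instead of reusing the ca.pem/ca-key.pem under ~/.minikube/certs (a sketch, not the provisioner's implementation):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key/cert; the real flow reuses the existing minikube CA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate carrying the SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-637675"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-637675", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.140")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}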
	I0717 19:33:52.257323  459061 provision.go:177] copyRemoteCerts
	I0717 19:33:52.257408  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:52.257443  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.260461  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.260873  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.260897  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.261094  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.261307  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.261485  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.261619  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.347197  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:52.372509  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 19:33:52.397643  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:52.421482  459061 provision.go:87] duration metric: took 459.823049ms to configureAuth
	I0717 19:33:52.421511  459061 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:52.421712  459061 config.go:182] Loaded profile config "embed-certs-637675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:33:52.421789  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.424390  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.424796  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.424827  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.425027  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.425221  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.425363  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.425502  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.425661  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:52.425872  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:52.425902  459061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:52.699426  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:52.699458  459061 machine.go:97] duration metric: took 1.089918524s to provisionDockerMachine
	I0717 19:33:52.699470  459061 start.go:293] postStartSetup for "embed-certs-637675" (driver="kvm2")
	I0717 19:33:52.699483  459061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:52.699505  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.699888  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:52.699943  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.703018  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.703417  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.703463  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.703693  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.704007  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.704318  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.704519  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.791925  459061 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:52.795954  459061 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:52.795980  459061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:52.796095  459061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:52.796191  459061 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:52.796308  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:52.805548  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:52.829531  459061 start.go:296] duration metric: took 130.04771ms for postStartSetup
	I0717 19:33:52.829569  459061 fix.go:56] duration metric: took 20.611916701s for fixHost
	I0717 19:33:52.829611  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.832274  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.832744  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.832778  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.832883  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.833094  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.833276  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.833448  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.833632  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:52.833852  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:52.833871  459061 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:33:52.941152  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244832.915250809
	
	I0717 19:33:52.941180  459061 fix.go:216] guest clock: 1721244832.915250809
	I0717 19:33:52.941194  459061 fix.go:229] Guest: 2024-07-17 19:33:52.915250809 +0000 UTC Remote: 2024-07-17 19:33:52.829573693 +0000 UTC m=+356.572558813 (delta=85.677116ms)
	I0717 19:33:52.941221  459061 fix.go:200] guest clock delta is within tolerance: 85.677116ms
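The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the drift when it is small. A small sketch of that comparison; the one-second tolerance here is an assumed value, not read from minikube's source.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClock parses the guest's `date +%s.%N` output,
// e.g. "1721244832.915250809", into a time.Time.
func guestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // assumed threshold, not minikube's exact value
	guest, err := guestClock("1721244832.915250809")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v, within tolerance %v: %v\n", delta, tolerance, delta <= tolerance)
}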
	I0717 19:33:52.941232  459061 start.go:83] releasing machines lock for "embed-certs-637675", held for 20.723622875s
	I0717 19:33:52.941257  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.941557  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:52.944096  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.944498  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.944526  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.944682  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945170  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945409  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945520  459061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:52.945595  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.945624  459061 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:52.945653  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.948197  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948530  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.948557  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948575  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948781  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.948912  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.948936  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948966  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.949080  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.949205  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.949228  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.949348  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.949352  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.949465  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:53.054206  459061 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:53.060916  459061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:53.204303  459061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:53.210204  459061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:53.210262  459061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:53.226045  459061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:53.226072  459061 start.go:495] detecting cgroup driver to use...
	I0717 19:33:53.226138  459061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:53.243047  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:53.256611  459061 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:53.256678  459061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:53.269932  459061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:53.285394  459061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:53.412896  459061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:53.573675  459061 docker.go:233] disabling docker service ...
	I0717 19:33:53.573749  459061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:53.590083  459061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:53.603710  459061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:53.727530  459061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:53.873274  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:53.905871  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:53.926509  459061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:33:53.926583  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.937258  459061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:53.937333  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.947782  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.958191  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.970004  459061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:53.982062  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.992942  459061 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:54.011137  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
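The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, reset conmon_cgroup, and make sure net.ipv4.ip_unprivileged_port_start=0 appears in default_sysctls. The same edits expressed as a standalone Go sketch over the file's text (patterns adapted from the logged sed expressions, not copied from minikube):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// rewriteCrioConf applies the same edits the logged sed commands perform.
func rewriteCrioConf(conf string) string {
	// Pin the pause image.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	// Force the cgroupfs cgroup manager.
	cgm := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgm.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conmon := regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`)
	conf = conmon.ReplaceAllString(conf, "")
	conf = strings.Replace(conf,
		`cgroup_manager = "cgroupfs"`,
		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)

	// Ensure the unprivileged-port sysctl is present.
	if !strings.Contains(conf, "default_sysctls") {
		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"old\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(in))
}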
	I0717 19:33:54.022170  459061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:54.033118  459061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:54.033183  459061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:54.046510  459061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:33:54.056086  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:54.203486  459061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:33:54.336557  459061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:54.336645  459061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:54.342342  459061 start.go:563] Will wait 60s for crictl version
	I0717 19:33:54.342422  459061 ssh_runner.go:195] Run: which crictl
	I0717 19:33:54.346334  459061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:54.388801  459061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:54.388898  459061 ssh_runner.go:195] Run: crio --version
	I0717 19:33:54.419237  459061 ssh_runner.go:195] Run: crio --version
	I0717 19:33:54.459513  459061 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 19:33:54.460727  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:54.463803  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:54.464194  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:54.464235  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:54.464521  459061 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:54.469869  459061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:54.484510  459061 kubeadm.go:883] updating cluster {Name:embed-certs-637675 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:54.484680  459061 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:33:54.484750  459061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:54.530253  459061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 19:33:54.530339  459061 ssh_runner.go:195] Run: which lz4
	I0717 19:33:54.534466  459061 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:33:54.538610  459061 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:54.538642  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 19:33:55.923529  459061 crio.go:462] duration metric: took 1.389095679s to copy over tarball
	I0717 19:33:55.923617  459061 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
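The preload path above stats /preloaded.tar.lz4, copies the ~395 MB tarball over SSH when it is missing, and unpacks it into /var with tar. A minimal sketch of the extraction step run locally; the paths are placeholders, and the real run executes this through the SSH runner on the guest.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// extractPreload unpacks a preloaded image tarball the same way the log
// shows: tar with lz4 decompression, preserving security.capability xattrs.
func extractPreload(tarball, dest string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("tar failed: %v: %s", err, out)
	}
	return time.Since(start), nil
}

func main() {
	// Placeholder paths; on the guest these are /preloaded.tar.lz4 and /var.
	d, err := extractPreload("/tmp/preloaded.tar.lz4", "/tmp/var")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("extracted in %s\n", d)
}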
	I0717 19:33:51.894538  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:53.896853  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:56.394940  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.556187  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:52.056816  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:52.555884  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.056440  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.556003  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:54.056810  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:54.556947  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:55.055878  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:55.556110  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:56.056460  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.693746  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:55.695193  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:58.139069  459061 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.215401803s)
	I0717 19:33:58.139116  459061 crio.go:469] duration metric: took 2.215553314s to extract the tarball
	I0717 19:33:58.139127  459061 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:58.178293  459061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:58.219163  459061 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:33:58.219188  459061 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:33:58.219197  459061 kubeadm.go:934] updating node { 192.168.39.140 8443 v1.30.2 crio true true} ...
	I0717 19:33:58.219306  459061 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-637675 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:58.219383  459061 ssh_runner.go:195] Run: crio config
	I0717 19:33:58.262906  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:33:58.262925  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:58.262934  459061 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:58.262957  459061 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.140 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-637675 NodeName:embed-certs-637675 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:58.263084  459061 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-637675"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:58.263147  459061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 19:33:58.273657  459061 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:58.273723  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:58.283599  459061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0717 19:33:58.300393  459061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:58.317742  459061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
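The kubeadm/kubelet/kube-proxy YAML above is rendered from the option set logged at kubeadm.go:181 and then copied to /var/tmp/minikube/kubeadm.yaml.new. A toy sketch of that kind of templating with text/template, covering only a fragment of the InitConfiguration and using made-up struct names:

package main

import (
	"log"
	"os"
	"text/template"
)

// initCfg holds just the fields needed for this fragment; the real
// generator covers the full kubeadm/kubelet/kube-proxy documents.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	cfg := initCfg{
		AdvertiseAddress: "192.168.39.140",
		BindPort:         8443,
		NodeName:         "embed-certs-637675",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		log.Fatal(err)
	}
}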
	I0717 19:33:58.334880  459061 ssh_runner.go:195] Run: grep 192.168.39.140	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:58.338573  459061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:58.350476  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:58.480706  459061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:58.498116  459061 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675 for IP: 192.168.39.140
	I0717 19:33:58.498139  459061 certs.go:194] generating shared ca certs ...
	I0717 19:33:58.498161  459061 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:58.498326  459061 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:58.498380  459061 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:58.498394  459061 certs.go:256] generating profile certs ...
	I0717 19:33:58.498518  459061 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/client.key
	I0717 19:33:58.498580  459061 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.key.c8cdbf09
	I0717 19:33:58.498853  459061 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.key
	I0717 19:33:58.499016  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:58.499066  459061 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:58.499081  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:58.499115  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:58.499256  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:58.499299  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:58.499435  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:58.500359  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:58.544981  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:58.588099  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:58.621983  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:58.652262  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 19:33:58.676887  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:33:58.701437  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:58.726502  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:58.751839  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:58.777500  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:58.801388  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:58.825450  459061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:58.842717  459061 ssh_runner.go:195] Run: openssl version
	I0717 19:33:58.848256  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:58.858519  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.863057  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.863130  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.869045  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:58.879255  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:58.890546  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.895342  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.895394  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.901225  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:58.912043  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:58.922557  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.926974  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.927063  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.932819  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:58.943396  459061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:58.947900  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:58.953946  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:58.960139  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:58.965932  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:58.971638  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:58.977437  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
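The `openssl x509 -checkend 86400` calls above simply ask whether each certificate expires within the next 24 hours. An equivalent check in Go with crypto/x509; the path in main is a placeholder.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside
// the given window, matching `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the run above checks the apiserver, etcd and
	// front-proxy client certificates the same way.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}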
	I0717 19:33:58.983041  459061 kubeadm.go:392] StartCluster: {Name:embed-certs-637675 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:58.983125  459061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:58.983159  459061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:59.026606  459061 cri.go:89] found id: ""
	I0717 19:33:59.026700  459061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:59.037020  459061 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:59.037045  459061 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:59.037089  459061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:59.046698  459061 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:59.047817  459061 kubeconfig.go:125] found "embed-certs-637675" server: "https://192.168.39.140:8443"
	I0717 19:33:59.049941  459061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:59.059451  459061 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.140
	I0717 19:33:59.059482  459061 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:59.059500  459061 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:59.059544  459061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:59.095066  459061 cri.go:89] found id: ""
	I0717 19:33:59.095128  459061 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:59.112170  459061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:59.122995  459061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:59.123014  459061 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:59.123063  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:59.133289  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:59.133372  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:59.143515  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:59.152845  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:59.152898  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:59.162821  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:59.173290  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:59.173353  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:59.184053  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:59.195281  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:59.195345  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:59.205300  459061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:59.219019  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:59.337326  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.220304  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.451460  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.631448  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.701064  459061 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:34:00.701166  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.201848  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.895830  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.394535  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:56.556934  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.055977  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.556878  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.056308  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.556348  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:59.056674  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:59.556870  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:00.055931  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:00.555977  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.055886  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.695265  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:59.973534  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:02.193004  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.701254  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.809514  459061 api_server.go:72] duration metric: took 1.10844859s to wait for apiserver process to appear ...
	I0717 19:34:01.809547  459061 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:34:01.809597  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:01.810183  459061 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I0717 19:34:02.309904  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.789701  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:34:04.789732  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:34:04.789745  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.862326  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:34:04.862359  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:34:04.862371  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.885715  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:04.885755  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:05.310281  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:05.314611  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:05.314645  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:05.810297  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:05.817458  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:05.817492  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:03.395467  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:05.894353  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.556897  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:02.056800  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:02.556122  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:03.056427  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:03.556914  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.056571  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.556144  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:05.056037  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:05.555875  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:06.056743  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.193618  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.194585  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.310494  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:06.318694  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:06.318740  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:06.809794  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:06.815231  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:06.815259  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:07.310287  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:07.314865  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:07.314892  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:07.810489  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:07.815153  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:07.815184  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:08.310494  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:08.315173  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0717 19:34:08.321509  459061 api_server.go:141] control plane version: v1.30.2
	I0717 19:34:08.321539  459061 api_server.go:131] duration metric: took 6.51198343s to wait for apiserver health ...
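	(Editor's note: the wait above is a plain HTTP poll of the apiserver's /healthz endpoint, treating "connection refused", 403 (anonymous access blocked) and 500 (post-start hooks still failing) as "not ready yet" until a 200 "ok" arrives. A minimal sketch, assuming the address, interval, and skipped certificate verification; this is not minikube's own helper.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Address and poll interval are assumptions for illustration.
		url := "https://192.168.39.140:8443/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a cluster-internal certificate, so a probe like
			// this typically skips verification (or trusts the cluster CA instead).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		for {
			resp, err := client.Get(url)
			if err != nil {
				// e.g. connection refused while the apiserver is still starting
				fmt.Println("stopped:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // "ok"
					return
				}
				// 403 and 500 both mean "keep waiting", as in the log above.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}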
	I0717 19:34:08.321550  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:34:08.321558  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:34:08.323369  459061 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:34:08.324555  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:34:08.336384  459061 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:34:08.357196  459061 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:34:08.373813  459061 system_pods.go:59] 8 kube-system pods found
	I0717 19:34:08.373849  459061 system_pods.go:61] "coredns-7db6d8ff4d-8brst" [aec5eaab-66a7-4221-84a1-b7967bd26cb8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:34:08.373856  459061 system_pods.go:61] "etcd-embed-certs-637675" [f2e395a3-fd1f-4a92-98ce-d6093d7b2faf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:34:08.373864  459061 system_pods.go:61] "kube-apiserver-embed-certs-637675" [358154e3-59e5-4535-9e1d-ee3b9eab5464] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:34:08.373871  459061 system_pods.go:61] "kube-controller-manager-embed-certs-637675" [641c70ba-a6fa-4975-bdb5-727b5ba64a87] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:34:08.373875  459061 system_pods.go:61] "kube-proxy-4cv66" [1a561d4e-4910-4ff0-9a1e-070e60e27cb4] Running
	I0717 19:34:08.373879  459061 system_pods.go:61] "kube-scheduler-embed-certs-637675" [83f50c1c-44ca-4b1f-ad85-0c617f1c8a67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:34:08.373886  459061 system_pods.go:61] "metrics-server-569cc877fc-mtnc6" [c44ea24f-67b5-4540-8c27-5b0068ac55b1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:34:08.373889  459061 system_pods.go:61] "storage-provisioner" [c42c411b-4206-4686-95c4-c9c279877684] Running
	I0717 19:34:08.373895  459061 system_pods.go:74] duration metric: took 16.671935ms to wait for pod list to return data ...
	I0717 19:34:08.373902  459061 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:34:08.388698  459061 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:34:08.388737  459061 node_conditions.go:123] node cpu capacity is 2
	I0717 19:34:08.388749  459061 node_conditions.go:105] duration metric: took 14.84302ms to run NodePressure ...
	I0717 19:34:08.388769  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:08.750983  459061 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:34:08.759547  459061 kubeadm.go:739] kubelet initialised
	I0717 19:34:08.759579  459061 kubeadm.go:740] duration metric: took 8.564098ms waiting for restarted kubelet to initialise ...
	I0717 19:34:08.759592  459061 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:34:08.769683  459061 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.780332  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.780364  459061 pod_ready.go:81] duration metric: took 10.641436ms for pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.780377  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.780387  459061 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.791556  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "etcd-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.791590  459061 pod_ready.go:81] duration metric: took 11.19204ms for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.791605  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "etcd-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.791613  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.801822  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.801874  459061 pod_ready.go:81] duration metric: took 10.246706ms for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.801889  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.801905  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.807704  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.807735  459061 pod_ready.go:81] duration metric: took 5.8166ms for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.807747  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.807755  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4cv66" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:09.161548  459061 pod_ready.go:92] pod "kube-proxy-4cv66" in "kube-system" namespace has status "Ready":"True"
	I0717 19:34:09.161587  459061 pod_ready.go:81] duration metric: took 353.822822ms for pod "kube-proxy-4cv66" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:09.161597  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:11.168387  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
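	(Editor's note: the pod_ready polling above reduces to reading the pod's Ready condition, and skipping the check while the hosting node itself is not Ready. A sketch with client-go; the kubeconfig path, namespace, and pod name are taken from the log for illustration only, and this is not minikube's own pod_ready helper.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path, namespace, and pod name are assumptions for illustration.
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)

		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
				"kube-scheduler-embed-certs-637675", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Println("pod not Ready yet, retrying")
			time.Sleep(2 * time.Second)
		}
	}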
	I0717 19:34:07.894730  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:09.895797  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.556740  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:07.056120  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:07.556375  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.055926  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.556426  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:09.056856  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:09.556032  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:10.056791  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:10.556117  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:11.056198  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.694237  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:11.192662  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:13.168686  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:15.668585  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:12.395034  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:14.895242  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:11.556103  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:12.056463  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:12.556709  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.056048  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.556926  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:14.056810  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:14.556793  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:15.056168  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:15.556716  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:16.056041  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.194925  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:15.693550  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:17.668639  459061 pod_ready.go:92] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:34:17.668755  459061 pod_ready.go:81] duration metric: took 8.50714283s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:17.668772  459061 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:19.678850  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:17.395670  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:19.395898  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:21.396841  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:16.556695  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.056877  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.556620  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:18.056628  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:18.556552  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:19.056137  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:19.556627  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:20.056655  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:20.556041  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:21.056058  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.694895  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:20.194174  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:22.176132  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:24.674293  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:23.894981  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.394921  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:21.556663  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.056552  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.556508  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:23.056623  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:23.556414  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:24.055964  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:24.556741  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:25.056721  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:25.556914  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:26.056520  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.693472  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:24.693880  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.695637  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.675680  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:29.176560  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:28.896034  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.394391  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.555925  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:27.056754  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:27.555925  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:28.056226  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:28.556626  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.056219  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.556961  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:30.056546  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:30.555883  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:31.056398  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.195231  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.693669  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.674839  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:33.676172  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:35.676669  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:33.394904  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:35.399901  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.556766  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:32.056928  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:32.556232  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:33.055917  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:33.556864  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.056869  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.555951  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:35.056718  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:35.556230  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:36.056542  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.195066  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:36.692760  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:38.175828  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:40.676034  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:37.894862  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:40.399004  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:36.556557  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:37.056940  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:37.556241  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.056369  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.555969  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:39.056289  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:39.556107  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:40.055999  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:40.556561  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:41.055882  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.693922  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:41.194229  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:42.676087  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:44.680245  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:42.898155  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:45.402470  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:41.556589  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:42.055932  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:42.556345  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:43.056754  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:43.056873  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:43.097168  459741 cri.go:89] found id: ""
	I0717 19:34:43.097214  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.097226  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:43.097234  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:43.097302  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:43.139033  459741 cri.go:89] found id: ""
	I0717 19:34:43.139067  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.139077  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:43.139084  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:43.139138  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:43.179520  459741 cri.go:89] found id: ""
	I0717 19:34:43.179549  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.179558  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:43.179566  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:43.179705  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:43.216014  459741 cri.go:89] found id: ""
	I0717 19:34:43.216044  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.216063  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:43.216071  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:43.216141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:43.250985  459741 cri.go:89] found id: ""
	I0717 19:34:43.251030  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.251038  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:43.251044  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:43.251109  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:43.286797  459741 cri.go:89] found id: ""
	I0717 19:34:43.286840  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.286849  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:43.286856  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:43.286919  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:43.321626  459741 cri.go:89] found id: ""
	I0717 19:34:43.321657  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.321665  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:43.321671  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:43.321733  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:43.355415  459741 cri.go:89] found id: ""
	I0717 19:34:43.355444  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.355452  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
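	(Editor's note: the block above queries the CRI socket for each expected control-plane container by name and finds none, which is why the run falls back to gathering raw kubelet/CRI-O logs. A hedged sketch of the same query issued via crictl from Go; the container names and sudo prefix mirror the log, and the error handling is simplified.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns the IDs of all containers (running or exited) whose
	// name matches the given filter, using crictl as in the log above.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
			ids, err := listContainers(name)
			if err != nil {
				fmt.Println(name, "lookup failed:", err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Println(name, "containers:", ids)
		}
	}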
	I0717 19:34:43.355462  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:43.355476  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:43.409331  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:43.409369  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:43.424013  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:43.424038  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:43.559102  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:43.559132  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:43.559149  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:43.625751  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:43.625791  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:46.168132  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:46.196943  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:46.197013  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:46.254167  459741 cri.go:89] found id: ""
	I0717 19:34:46.254197  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.254205  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:46.254211  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:46.254277  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:46.291018  459741 cri.go:89] found id: ""
	I0717 19:34:46.291052  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.291063  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:46.291072  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:46.291136  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:46.331767  459741 cri.go:89] found id: ""
	I0717 19:34:46.331812  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.331825  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:46.331835  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:46.331918  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:46.373157  459741 cri.go:89] found id: ""
	I0717 19:34:46.373206  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.373218  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:46.373226  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:46.373297  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:46.413014  459741 cri.go:89] found id: ""
	I0717 19:34:46.413041  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.413055  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:46.413061  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:46.413114  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:46.456115  459741 cri.go:89] found id: ""
	I0717 19:34:46.456148  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.456159  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:46.456167  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:46.456230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:46.492962  459741 cri.go:89] found id: ""
	I0717 19:34:46.493048  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.493063  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:46.493074  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:46.493149  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:43.195298  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:45.695368  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:47.175268  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:49.176199  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:47.895768  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:50.395078  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:46.533824  459741 cri.go:89] found id: ""
	I0717 19:34:46.533856  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.533868  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:46.533882  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:46.533899  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:46.614205  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:46.614229  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:46.614242  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:46.689833  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:46.689875  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:46.729427  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:46.729463  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:46.779887  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:46.779930  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:49.294846  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:49.308554  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:49.308625  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:49.343774  459741 cri.go:89] found id: ""
	I0717 19:34:49.343802  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.343810  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:49.343816  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:49.343872  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:49.380698  459741 cri.go:89] found id: ""
	I0717 19:34:49.380729  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.380737  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:49.380744  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:49.380796  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:49.422026  459741 cri.go:89] found id: ""
	I0717 19:34:49.422059  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.422073  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:49.422082  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:49.422147  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:49.465793  459741 cri.go:89] found id: ""
	I0717 19:34:49.465837  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.465850  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:49.465859  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:49.465929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:49.503462  459741 cri.go:89] found id: ""
	I0717 19:34:49.503507  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.503519  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:49.503528  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:49.503598  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:49.546776  459741 cri.go:89] found id: ""
	I0717 19:34:49.546808  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.546818  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:49.546826  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:49.546895  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:49.589367  459741 cri.go:89] found id: ""
	I0717 19:34:49.589401  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.589412  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:49.589420  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:49.589493  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:49.625497  459741 cri.go:89] found id: ""
	I0717 19:34:49.625532  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.625543  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:49.625557  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:49.625574  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:49.664499  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:49.664536  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:49.718160  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:49.718202  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:49.732774  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:49.732807  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:49.806951  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:49.806981  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:49.806999  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:48.192967  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:50.193695  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:51.675656  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:54.175342  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:56.176351  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:52.895953  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:55.394057  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:52.379790  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:52.393469  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:52.393554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:52.434277  459741 cri.go:89] found id: ""
	I0717 19:34:52.434312  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.434322  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:52.434330  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:52.434388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:52.470378  459741 cri.go:89] found id: ""
	I0717 19:34:52.470413  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.470421  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:52.470428  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:52.470501  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:52.506331  459741 cri.go:89] found id: ""
	I0717 19:34:52.506361  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.506369  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:52.506376  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:52.506431  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:52.547497  459741 cri.go:89] found id: ""
	I0717 19:34:52.547532  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.547540  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:52.547545  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:52.547615  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:52.584389  459741 cri.go:89] found id: ""
	I0717 19:34:52.584423  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.584434  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:52.584442  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:52.584527  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:52.621381  459741 cri.go:89] found id: ""
	I0717 19:34:52.621408  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.621416  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:52.621422  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:52.621472  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:52.661706  459741 cri.go:89] found id: ""
	I0717 19:34:52.661744  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.661756  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:52.661764  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:52.661832  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:52.702736  459741 cri.go:89] found id: ""
	I0717 19:34:52.702763  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.702773  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:52.702784  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:52.702799  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:52.741742  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:52.741779  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:52.794377  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:52.794429  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:52.809685  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:52.809717  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:52.884263  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:52.884289  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:52.884305  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:55.472342  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:55.486612  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:55.486677  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:55.519486  459741 cri.go:89] found id: ""
	I0717 19:34:55.519514  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.519522  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:55.519528  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:55.519638  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:55.555162  459741 cri.go:89] found id: ""
	I0717 19:34:55.555190  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.555198  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:55.555204  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:55.555259  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:55.591239  459741 cri.go:89] found id: ""
	I0717 19:34:55.591276  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.591288  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:55.591297  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:55.591359  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:55.628203  459741 cri.go:89] found id: ""
	I0717 19:34:55.628239  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.628251  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:55.628258  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:55.628347  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:55.664663  459741 cri.go:89] found id: ""
	I0717 19:34:55.664702  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.664715  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:55.664725  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:55.664822  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:55.702741  459741 cri.go:89] found id: ""
	I0717 19:34:55.702773  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.702780  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:55.702788  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:55.702862  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:55.745601  459741 cri.go:89] found id: ""
	I0717 19:34:55.745642  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.745653  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:55.745661  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:55.745742  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:55.786699  459741 cri.go:89] found id: ""
	I0717 19:34:55.786727  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.786736  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:55.786746  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:55.786764  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:55.831685  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:55.831722  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:55.885346  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:55.885389  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:55.902374  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:55.902407  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:55.974221  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:55.974245  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:55.974259  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:52.693991  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:55.194420  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:58.676747  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:01.176131  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:57.894988  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:00.394486  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:58.557685  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:58.571821  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:58.571887  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:58.606713  459741 cri.go:89] found id: ""
	I0717 19:34:58.606742  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.606751  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:58.606757  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:58.606831  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:58.640693  459741 cri.go:89] found id: ""
	I0717 19:34:58.640728  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.640738  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:58.640746  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:58.640816  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:58.675351  459741 cri.go:89] found id: ""
	I0717 19:34:58.675385  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.675396  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:58.675403  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:58.675470  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:58.711792  459741 cri.go:89] found id: ""
	I0717 19:34:58.711825  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.711834  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:58.711841  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:58.711898  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:58.751391  459741 cri.go:89] found id: ""
	I0717 19:34:58.751418  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.751427  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:58.751432  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:58.751492  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:58.789067  459741 cri.go:89] found id: ""
	I0717 19:34:58.789099  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.789109  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:58.789116  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:58.789193  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:58.827415  459741 cri.go:89] found id: ""
	I0717 19:34:58.827453  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.827464  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:58.827470  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:58.827538  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:58.865505  459741 cri.go:89] found id: ""
	I0717 19:34:58.865543  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.865553  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:58.865566  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:58.865587  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:58.921388  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:58.921427  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:58.935694  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:58.935724  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:59.012534  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:59.012561  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:59.012598  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:59.095950  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:59.096045  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:57.694041  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:00.194529  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:02.194641  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:03.176199  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:05.176261  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:02.894558  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:04.899436  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:01.640824  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:01.654969  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:01.655062  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:01.700480  459741 cri.go:89] found id: ""
	I0717 19:35:01.700528  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.700540  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:01.700548  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:01.700621  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:01.739274  459741 cri.go:89] found id: ""
	I0717 19:35:01.739309  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.739319  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:01.739327  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:01.739403  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:01.778555  459741 cri.go:89] found id: ""
	I0717 19:35:01.778591  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.778601  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:01.778609  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:01.778676  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:01.819147  459741 cri.go:89] found id: ""
	I0717 19:35:01.819189  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.819204  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:01.819213  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:01.819290  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:01.857132  459741 cri.go:89] found id: ""
	I0717 19:35:01.857178  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.857190  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:01.857199  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:01.857274  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:01.895551  459741 cri.go:89] found id: ""
	I0717 19:35:01.895583  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.895593  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:01.895602  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:01.895679  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:01.938146  459741 cri.go:89] found id: ""
	I0717 19:35:01.938185  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.938198  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:01.938206  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:01.938284  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:01.974876  459741 cri.go:89] found id: ""
	I0717 19:35:01.974909  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.974919  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:01.974933  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:01.974955  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:02.050651  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:02.050679  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:02.050711  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:02.130149  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:02.130191  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:02.170930  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:02.170961  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:02.226842  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:02.226889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:04.742978  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:04.757649  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:04.757714  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:04.795487  459741 cri.go:89] found id: ""
	I0717 19:35:04.795517  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.795525  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:04.795531  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:04.795583  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:04.832554  459741 cri.go:89] found id: ""
	I0717 19:35:04.832596  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.832607  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:04.832620  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:04.832678  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:04.867859  459741 cri.go:89] found id: ""
	I0717 19:35:04.867895  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.867904  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:04.867911  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:04.867971  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:04.905936  459741 cri.go:89] found id: ""
	I0717 19:35:04.905969  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.905978  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:04.905985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:04.906064  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:04.943177  459741 cri.go:89] found id: ""
	I0717 19:35:04.943204  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.943213  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:04.943219  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:04.943273  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:04.980038  459741 cri.go:89] found id: ""
	I0717 19:35:04.980073  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.980087  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:04.980093  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:04.980154  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:05.020848  459741 cri.go:89] found id: ""
	I0717 19:35:05.020885  459741 logs.go:276] 0 containers: []
	W0717 19:35:05.020896  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:05.020907  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:05.020985  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:05.060505  459741 cri.go:89] found id: ""
	I0717 19:35:05.060543  459741 logs.go:276] 0 containers: []
	W0717 19:35:05.060556  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:05.060592  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:05.060617  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:05.113354  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:05.113400  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:05.128045  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:05.128086  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:05.213923  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:05.214020  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:05.214045  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:05.296526  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:05.296577  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:04.194995  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:06.694576  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.678930  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:10.175252  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.394677  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:09.394932  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:11.395166  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.835865  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:07.851503  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:07.851581  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:07.899945  459741 cri.go:89] found id: ""
	I0717 19:35:07.899976  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.899984  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:07.899992  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:07.900066  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:07.938294  459741 cri.go:89] found id: ""
	I0717 19:35:07.938326  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.938335  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:07.938342  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:07.938402  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:07.975274  459741 cri.go:89] found id: ""
	I0717 19:35:07.975309  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.975319  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:07.975327  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:07.975401  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:08.010818  459741 cri.go:89] found id: ""
	I0717 19:35:08.010864  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.010873  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:08.010880  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:08.010945  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:08.054494  459741 cri.go:89] found id: ""
	I0717 19:35:08.054532  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.054544  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:08.054552  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:08.054651  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:08.096357  459741 cri.go:89] found id: ""
	I0717 19:35:08.096384  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.096393  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:08.096399  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:08.096461  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:08.134694  459741 cri.go:89] found id: ""
	I0717 19:35:08.134739  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.134749  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:08.134755  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:08.134833  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:08.171722  459741 cri.go:89] found id: ""
	I0717 19:35:08.171757  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.171768  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:08.171780  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:08.171797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:08.252441  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:08.252502  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:08.298782  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:08.298815  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:08.352934  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:08.352974  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:08.367121  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:08.367158  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:08.445860  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:10.946537  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:10.959955  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:10.960025  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:10.994611  459741 cri.go:89] found id: ""
	I0717 19:35:10.994646  459741 logs.go:276] 0 containers: []
	W0717 19:35:10.994658  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:10.994667  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:10.994733  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:11.031997  459741 cri.go:89] found id: ""
	I0717 19:35:11.032027  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.032035  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:11.032041  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:11.032115  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:11.073818  459741 cri.go:89] found id: ""
	I0717 19:35:11.073854  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.073865  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:11.073874  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:11.073942  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:11.109966  459741 cri.go:89] found id: ""
	I0717 19:35:11.110000  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.110012  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:11.110025  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:11.110100  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:11.146928  459741 cri.go:89] found id: ""
	I0717 19:35:11.146958  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.146980  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:11.146988  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:11.147056  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:11.189327  459741 cri.go:89] found id: ""
	I0717 19:35:11.189364  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.189374  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:11.189383  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:11.189457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:11.228587  459741 cri.go:89] found id: ""
	I0717 19:35:11.228628  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.228641  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:11.228650  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:11.228719  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:11.267624  459741 cri.go:89] found id: ""
	I0717 19:35:11.267671  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.267685  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:11.267699  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:11.267716  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:11.322589  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:11.322631  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:11.338101  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:11.338147  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:11.411360  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:11.411387  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:11.411405  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:11.495657  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:11.495701  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:09.194430  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:11.693290  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:12.175345  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:14.175825  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:16.177247  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:13.894711  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:15.894771  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:14.037797  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:14.050939  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:14.051012  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:14.093711  459741 cri.go:89] found id: ""
	I0717 19:35:14.093744  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.093756  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:14.093764  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:14.093837  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:14.132139  459741 cri.go:89] found id: ""
	I0717 19:35:14.132168  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.132180  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:14.132188  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:14.132256  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:14.170950  459741 cri.go:89] found id: ""
	I0717 19:35:14.170978  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.170988  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:14.170995  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:14.171073  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:14.211104  459741 cri.go:89] found id: ""
	I0717 19:35:14.211138  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.211148  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:14.211155  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:14.211229  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:14.245921  459741 cri.go:89] found id: ""
	I0717 19:35:14.245961  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.245975  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:14.245985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:14.246053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:14.309477  459741 cri.go:89] found id: ""
	I0717 19:35:14.309509  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.309520  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:14.309529  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:14.309617  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:14.346835  459741 cri.go:89] found id: ""
	I0717 19:35:14.346863  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.346872  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:14.346878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:14.346935  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:14.381258  459741 cri.go:89] found id: ""
	I0717 19:35:14.381289  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.381298  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:14.381307  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:14.381324  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:14.436214  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:14.436262  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:14.452446  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:14.452478  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:14.520238  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:14.520265  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:14.520282  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:14.600444  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:14.600502  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:13.694391  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:16.194147  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:18.676158  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.676984  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:18.394226  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.395263  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:17.144586  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:17.157992  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:17.158084  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:17.195200  459741 cri.go:89] found id: ""
	I0717 19:35:17.195228  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.195238  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:17.195245  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:17.195308  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:17.231846  459741 cri.go:89] found id: ""
	I0717 19:35:17.231892  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.231904  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:17.231913  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:17.231974  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:17.268234  459741 cri.go:89] found id: ""
	I0717 19:35:17.268261  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.268269  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:17.268275  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:17.268328  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:17.308536  459741 cri.go:89] found id: ""
	I0717 19:35:17.308565  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.308574  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:17.308581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:17.308655  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:17.344285  459741 cri.go:89] found id: ""
	I0717 19:35:17.344316  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.344325  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:17.344331  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:17.344393  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:17.384384  459741 cri.go:89] found id: ""
	I0717 19:35:17.384416  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.384425  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:17.384431  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:17.384518  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:17.422255  459741 cri.go:89] found id: ""
	I0717 19:35:17.422282  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.422291  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:17.422297  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:17.422349  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:17.459561  459741 cri.go:89] found id: ""
	I0717 19:35:17.459590  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.459599  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:17.459611  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:17.459628  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:17.473472  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:17.473510  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:17.544929  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:17.544962  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:17.544979  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:17.627230  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:17.627275  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:17.680586  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:17.680622  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:20.234582  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:20.248215  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:20.248282  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:20.286124  459741 cri.go:89] found id: ""
	I0717 19:35:20.286159  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.286171  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:20.286180  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:20.286251  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:20.323885  459741 cri.go:89] found id: ""
	I0717 19:35:20.323925  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.323938  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:20.323945  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:20.324013  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:20.363968  459741 cri.go:89] found id: ""
	I0717 19:35:20.364011  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.364025  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:20.364034  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:20.364108  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:20.404100  459741 cri.go:89] found id: ""
	I0717 19:35:20.404127  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.404136  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:20.404142  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:20.404212  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:20.442339  459741 cri.go:89] found id: ""
	I0717 19:35:20.442372  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.442383  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:20.442391  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:20.442462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:20.480461  459741 cri.go:89] found id: ""
	I0717 19:35:20.480505  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.480517  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:20.480526  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:20.480618  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:20.516072  459741 cri.go:89] found id: ""
	I0717 19:35:20.516104  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.516114  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:20.516119  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:20.516171  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:20.552294  459741 cri.go:89] found id: ""
	I0717 19:35:20.552333  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.552345  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:20.552359  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:20.552377  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:20.607025  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:20.607067  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:20.624323  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:20.624363  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:20.716528  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:20.716550  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:20.716567  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:20.797015  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:20.797059  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:18.693667  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.694367  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:23.175240  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:25.175374  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:22.893704  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:24.893940  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:23.345063  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:23.358664  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:23.358781  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:23.395399  459741 cri.go:89] found id: ""
	I0717 19:35:23.395429  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.395436  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:23.395441  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:23.395498  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:23.434827  459741 cri.go:89] found id: ""
	I0717 19:35:23.434866  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.434880  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:23.434889  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:23.434960  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:23.470884  459741 cri.go:89] found id: ""
	I0717 19:35:23.470915  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.470931  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:23.470937  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:23.470989  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:23.508532  459741 cri.go:89] found id: ""
	I0717 19:35:23.508566  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.508575  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:23.508581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:23.508636  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:23.543803  459741 cri.go:89] found id: ""
	I0717 19:35:23.543840  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.543856  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:23.543865  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:23.543938  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:23.578897  459741 cri.go:89] found id: ""
	I0717 19:35:23.578942  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.578953  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:23.578962  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:23.579028  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:23.617967  459741 cri.go:89] found id: ""
	I0717 19:35:23.618003  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.618013  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:23.618021  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:23.618092  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:23.660780  459741 cri.go:89] found id: ""
	I0717 19:35:23.660818  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.660830  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:23.660845  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:23.660862  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:23.745248  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:23.745305  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:23.784355  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:23.784392  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:23.838152  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:23.838199  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:23.853017  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:23.853046  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:23.932674  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:26.433476  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:26.457953  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:26.458030  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:23.192304  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:25.193087  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:27.176102  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:29.677887  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:26.895714  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:29.398017  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:26.515559  459741 cri.go:89] found id: ""
	I0717 19:35:26.515589  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.515598  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:26.515605  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:26.515668  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:26.555092  459741 cri.go:89] found id: ""
	I0717 19:35:26.555123  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.555134  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:26.555142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:26.555208  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:26.591291  459741 cri.go:89] found id: ""
	I0717 19:35:26.591335  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.591348  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:26.591357  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:26.591429  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:26.628941  459741 cri.go:89] found id: ""
	I0717 19:35:26.628970  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.628978  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:26.628985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:26.629050  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:26.668355  459741 cri.go:89] found id: ""
	I0717 19:35:26.668386  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.668394  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:26.668399  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:26.668457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:26.711810  459741 cri.go:89] found id: ""
	I0717 19:35:26.711846  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.711857  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:26.711865  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:26.711937  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:26.751674  459741 cri.go:89] found id: ""
	I0717 19:35:26.751708  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.751719  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:26.751726  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:26.751781  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:26.792690  459741 cri.go:89] found id: ""
	I0717 19:35:26.792784  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.792803  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:26.792816  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:26.792847  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:26.846466  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:26.846503  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:26.861467  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:26.861500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:26.934219  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:26.934244  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:26.934260  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:27.017150  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:27.017197  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:29.569360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:29.584040  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:29.584112  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:29.619704  459741 cri.go:89] found id: ""
	I0717 19:35:29.619738  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.619750  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:29.619756  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:29.619824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:29.655983  459741 cri.go:89] found id: ""
	I0717 19:35:29.656018  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.656030  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:29.656037  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:29.656103  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:29.694056  459741 cri.go:89] found id: ""
	I0717 19:35:29.694088  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.694098  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:29.694107  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:29.694165  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:29.731955  459741 cri.go:89] found id: ""
	I0717 19:35:29.732047  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.732066  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:29.732075  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:29.732142  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:29.765921  459741 cri.go:89] found id: ""
	I0717 19:35:29.765952  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.765961  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:29.765967  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:29.766022  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:29.798699  459741 cri.go:89] found id: ""
	I0717 19:35:29.798728  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.798736  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:29.798742  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:29.798804  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:29.832551  459741 cri.go:89] found id: ""
	I0717 19:35:29.832580  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.832587  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:29.832593  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:29.832652  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:29.867985  459741 cri.go:89] found id: ""
	I0717 19:35:29.868022  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.868033  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:29.868046  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:29.868071  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:29.941724  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:29.941746  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:29.941760  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:30.025462  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:30.025506  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:30.066732  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:30.066768  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:30.117389  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:30.117434  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:27.694070  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:30.193593  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.194062  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.175354  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:34.675049  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:31.894626  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:33.897661  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.394620  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.632779  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:32.648751  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:32.648828  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:32.686145  459741 cri.go:89] found id: ""
	I0717 19:35:32.686174  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.686182  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:32.686190  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:32.686242  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:32.721924  459741 cri.go:89] found id: ""
	I0717 19:35:32.721956  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.721967  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:32.721974  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:32.722042  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:32.760815  459741 cri.go:89] found id: ""
	I0717 19:35:32.760851  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.760862  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:32.760869  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:32.760939  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:32.797740  459741 cri.go:89] found id: ""
	I0717 19:35:32.797779  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.797792  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:32.797801  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:32.797878  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:32.833914  459741 cri.go:89] found id: ""
	I0717 19:35:32.833947  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.833955  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:32.833962  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:32.834020  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:32.870265  459741 cri.go:89] found id: ""
	I0717 19:35:32.870297  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.870306  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:32.870319  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:32.870388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:32.911340  459741 cri.go:89] found id: ""
	I0717 19:35:32.911380  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.911391  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:32.911402  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:32.911470  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:32.947932  459741 cri.go:89] found id: ""
	I0717 19:35:32.947967  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.947978  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:32.947990  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:32.948008  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:33.016473  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:33.016513  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:33.016527  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:33.096741  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:33.096783  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:33.137686  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:33.137723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:33.194110  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:33.194157  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:35.710074  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:35.723799  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:35.723880  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:35.759473  459741 cri.go:89] found id: ""
	I0717 19:35:35.759515  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.759526  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:35.759535  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:35.759606  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:35.796764  459741 cri.go:89] found id: ""
	I0717 19:35:35.796799  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.796809  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:35.796817  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:35.796892  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:35.831345  459741 cri.go:89] found id: ""
	I0717 19:35:35.831375  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.831386  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:35.831394  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:35.831463  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:35.869885  459741 cri.go:89] found id: ""
	I0717 19:35:35.869920  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.869931  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:35.869939  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:35.870009  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:35.908812  459741 cri.go:89] found id: ""
	I0717 19:35:35.908840  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.908849  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:35.908855  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:35.908909  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:35.946227  459741 cri.go:89] found id: ""
	I0717 19:35:35.946285  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.946297  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:35.946305  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:35.946387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:35.983534  459741 cri.go:89] found id: ""
	I0717 19:35:35.983577  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.983592  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:35.983601  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:35.983670  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:36.019516  459741 cri.go:89] found id: ""
	I0717 19:35:36.019552  459741 logs.go:276] 0 containers: []
	W0717 19:35:36.019564  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:36.019578  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:36.019597  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:36.070887  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:36.070931  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:36.087054  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:36.087092  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:36.163759  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:36.163795  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:36.163809  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:36.249968  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:36.250012  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:34.693272  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.693505  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.675472  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.677852  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:40.679662  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.895397  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:41.394394  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.799616  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:38.813094  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:38.813161  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:38.848696  459741 cri.go:89] found id: ""
	I0717 19:35:38.848731  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.848745  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:38.848754  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:38.848836  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:38.885898  459741 cri.go:89] found id: ""
	I0717 19:35:38.885932  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.885943  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:38.885950  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:38.886016  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:38.925499  459741 cri.go:89] found id: ""
	I0717 19:35:38.925531  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.925543  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:38.925550  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:38.925615  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:38.961176  459741 cri.go:89] found id: ""
	I0717 19:35:38.961209  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.961218  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:38.961225  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:38.961279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:38.998940  459741 cri.go:89] found id: ""
	I0717 19:35:38.998971  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.998980  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:38.998986  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:38.999040  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:39.034934  459741 cri.go:89] found id: ""
	I0717 19:35:39.034966  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.034973  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:39.034980  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:39.035034  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:39.070278  459741 cri.go:89] found id: ""
	I0717 19:35:39.070309  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.070319  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:39.070327  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:39.070413  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:39.106302  459741 cri.go:89] found id: ""
	I0717 19:35:39.106337  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.106348  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:39.106361  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:39.106379  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:39.145656  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:39.145685  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:39.198998  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:39.199042  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:39.215383  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:39.215416  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:39.284244  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:39.284270  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:39.284286  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:38.693865  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:40.694855  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:43.176915  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.676854  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:43.394736  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.395188  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:41.864335  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:41.878557  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:41.878645  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:41.919806  459741 cri.go:89] found id: ""
	I0717 19:35:41.919843  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.919856  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:41.919865  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:41.919938  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:41.956113  459741 cri.go:89] found id: ""
	I0717 19:35:41.956144  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.956154  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:41.956161  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:41.956230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:41.996211  459741 cri.go:89] found id: ""
	I0717 19:35:41.996256  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.996266  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:41.996274  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:41.996341  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:42.030800  459741 cri.go:89] found id: ""
	I0717 19:35:42.030829  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.030840  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:42.030847  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:42.030922  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:42.065307  459741 cri.go:89] found id: ""
	I0717 19:35:42.065347  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.065358  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:42.065368  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:42.065440  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:42.103574  459741 cri.go:89] found id: ""
	I0717 19:35:42.103609  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.103621  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:42.103628  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:42.103693  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:42.141146  459741 cri.go:89] found id: ""
	I0717 19:35:42.141181  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.141320  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:42.141337  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:42.141418  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:42.179958  459741 cri.go:89] found id: ""
	I0717 19:35:42.179986  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.179994  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:42.180004  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:42.180017  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:42.194911  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:42.194947  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:42.267709  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:42.267750  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:42.267772  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:42.347258  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:42.347302  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:42.393595  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:42.393631  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:44.946043  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:44.958994  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:44.959086  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:44.997687  459741 cri.go:89] found id: ""
	I0717 19:35:44.997724  459741 logs.go:276] 0 containers: []
	W0717 19:35:44.997735  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:44.997743  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:44.997814  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:45.038023  459741 cri.go:89] found id: ""
	I0717 19:35:45.038060  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.038070  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:45.038079  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:45.038141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:45.073529  459741 cri.go:89] found id: ""
	I0717 19:35:45.073562  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.073573  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:45.073581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:45.073644  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:45.109831  459741 cri.go:89] found id: ""
	I0717 19:35:45.109863  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.109871  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:45.109878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:45.109933  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:45.147828  459741 cri.go:89] found id: ""
	I0717 19:35:45.147867  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.147891  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:45.147899  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:45.147986  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:45.184729  459741 cri.go:89] found id: ""
	I0717 19:35:45.184765  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.184777  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:45.184784  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:45.184846  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:45.223895  459741 cri.go:89] found id: ""
	I0717 19:35:45.223940  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.223950  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:45.223956  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:45.224016  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:45.263391  459741 cri.go:89] found id: ""
	I0717 19:35:45.263421  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.263430  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:45.263440  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:45.263457  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:45.316323  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:45.316369  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:45.331447  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:45.331491  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:45.413226  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:45.413259  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:45.413277  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:45.498680  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:45.498738  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:43.193210  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.693264  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:48.175929  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:50.176109  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:47.893486  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:49.894666  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:48.043162  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:48.057081  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:48.057146  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:48.096607  459741 cri.go:89] found id: ""
	I0717 19:35:48.096636  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.096644  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:48.096650  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:48.096710  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:48.132865  459741 cri.go:89] found id: ""
	I0717 19:35:48.132895  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.132906  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:48.132913  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:48.132979  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:48.168060  459741 cri.go:89] found id: ""
	I0717 19:35:48.168090  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.168102  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:48.168109  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:48.168177  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:48.203993  459741 cri.go:89] found id: ""
	I0717 19:35:48.204023  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.204033  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:48.204041  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:48.204102  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:48.240321  459741 cri.go:89] found id: ""
	I0717 19:35:48.240353  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.240364  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:48.240371  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:48.240440  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:48.281103  459741 cri.go:89] found id: ""
	I0717 19:35:48.281147  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.281158  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:48.281167  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:48.281233  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:48.316002  459741 cri.go:89] found id: ""
	I0717 19:35:48.316034  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.316043  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:48.316049  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:48.316102  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:48.355370  459741 cri.go:89] found id: ""
	I0717 19:35:48.355399  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.355409  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:48.355421  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:48.355456  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:48.372448  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:48.372496  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:48.443867  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:48.443901  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:48.443919  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:48.519762  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:48.519807  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:48.562263  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:48.562297  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:51.112016  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:51.125350  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:51.125421  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:51.162053  459741 cri.go:89] found id: ""
	I0717 19:35:51.162090  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.162101  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:51.162111  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:51.162182  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:51.201853  459741 cri.go:89] found id: ""
	I0717 19:35:51.201924  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.201937  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:51.201944  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:51.202021  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:51.241675  459741 cri.go:89] found id: ""
	I0717 19:35:51.241709  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.241720  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:51.241729  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:51.241798  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:51.279332  459741 cri.go:89] found id: ""
	I0717 19:35:51.279369  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.279380  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:51.279388  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:51.279443  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:51.316375  459741 cri.go:89] found id: ""
	I0717 19:35:51.316413  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.316424  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:51.316432  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:51.316531  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:51.353300  459741 cri.go:89] found id: ""
	I0717 19:35:51.353337  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.353347  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:51.353355  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:51.353424  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:51.390413  459741 cri.go:89] found id: ""
	I0717 19:35:51.390441  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.390449  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:51.390457  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:51.390523  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:51.428040  459741 cri.go:89] found id: ""
	I0717 19:35:51.428077  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.428089  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:51.428103  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:51.428145  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:51.481743  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:51.481792  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:51.498226  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:51.498261  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:35:48.194645  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:50.194741  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:52.676762  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:55.177549  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:51.895688  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:54.394821  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	W0717 19:35:51.579871  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:51.579895  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:51.579909  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:51.659448  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:51.659490  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:54.201712  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:54.215688  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:54.215766  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:54.253448  459741 cri.go:89] found id: ""
	I0717 19:35:54.253479  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.253487  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:54.253493  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:54.253547  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:54.288135  459741 cri.go:89] found id: ""
	I0717 19:35:54.288176  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.288187  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:54.288194  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:54.288292  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:54.324798  459741 cri.go:89] found id: ""
	I0717 19:35:54.324845  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.324855  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:54.324864  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:54.324936  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:54.363909  459741 cri.go:89] found id: ""
	I0717 19:35:54.363943  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.363955  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:54.363964  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:54.364039  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:54.401221  459741 cri.go:89] found id: ""
	I0717 19:35:54.401248  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.401259  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:54.401267  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:54.401335  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:54.439258  459741 cri.go:89] found id: ""
	I0717 19:35:54.439285  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.439293  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:54.439299  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:54.439352  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:54.473321  459741 cri.go:89] found id: ""
	I0717 19:35:54.473358  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.473373  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:54.473379  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:54.473432  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:54.519107  459741 cri.go:89] found id: ""
	I0717 19:35:54.519141  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.519152  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:54.519167  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:54.519184  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:54.562666  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:54.562710  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:54.614711  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:54.614756  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:54.630953  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:54.630986  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:54.706639  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:54.706666  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:54.706684  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:52.694467  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:55.193366  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:57.179574  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:59.675883  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:56.895166  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:59.396238  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:57.289180  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:57.302364  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:57.302447  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:57.344401  459741 cri.go:89] found id: ""
	I0717 19:35:57.344437  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.344450  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:57.344459  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:57.344551  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:57.384095  459741 cri.go:89] found id: ""
	I0717 19:35:57.384126  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.384135  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:57.384142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:57.384209  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:57.422789  459741 cri.go:89] found id: ""
	I0717 19:35:57.422825  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.422836  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:57.422844  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:57.422914  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:57.460943  459741 cri.go:89] found id: ""
	I0717 19:35:57.460970  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.460979  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:57.460984  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:57.461035  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:57.495168  459741 cri.go:89] found id: ""
	I0717 19:35:57.495197  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.495204  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:57.495211  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:57.495267  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:57.529611  459741 cri.go:89] found id: ""
	I0717 19:35:57.529641  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.529649  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:57.529656  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:57.529719  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:57.565502  459741 cri.go:89] found id: ""
	I0717 19:35:57.565535  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.565544  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:57.565549  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:57.565610  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:57.601058  459741 cri.go:89] found id: ""
	I0717 19:35:57.601093  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.601107  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:57.601121  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:57.601139  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:57.651408  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:57.651450  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:57.665696  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:57.665734  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:57.739259  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:57.739301  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:57.739335  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:57.818085  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:57.818128  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:00.358441  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:00.371840  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:00.371904  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:00.411607  459741 cri.go:89] found id: ""
	I0717 19:36:00.411639  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.411647  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:00.411653  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:00.411717  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:00.448879  459741 cri.go:89] found id: ""
	I0717 19:36:00.448917  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.448929  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:00.448938  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:00.449006  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:00.489637  459741 cri.go:89] found id: ""
	I0717 19:36:00.489683  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.489695  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:00.489705  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:00.489773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:00.528172  459741 cri.go:89] found id: ""
	I0717 19:36:00.528206  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.528215  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:00.528221  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:00.528284  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:00.564857  459741 cri.go:89] found id: ""
	I0717 19:36:00.564891  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.564903  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:00.564911  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:00.564979  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:00.601226  459741 cri.go:89] found id: ""
	I0717 19:36:00.601257  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.601269  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:00.601277  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:00.601342  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:00.641481  459741 cri.go:89] found id: ""
	I0717 19:36:00.641515  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.641526  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:00.641533  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:00.641609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:00.678564  459741 cri.go:89] found id: ""
	I0717 19:36:00.678590  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.678598  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:00.678608  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:00.678622  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:00.763613  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:00.763657  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:00.804763  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:00.804797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:00.856648  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:00.856686  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:00.870767  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:00.870797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:00.949952  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:57.694827  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:00.193607  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:02.194404  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:01.676020  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:03.676246  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:05.676400  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:01.894566  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:04.394473  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:06.395396  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:03.450461  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:03.465429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:03.465500  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:03.504346  459741 cri.go:89] found id: ""
	I0717 19:36:03.504377  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.504387  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:03.504393  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:03.504457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:03.546643  459741 cri.go:89] found id: ""
	I0717 19:36:03.546671  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.546678  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:03.546685  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:03.546741  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:03.587389  459741 cri.go:89] found id: ""
	I0717 19:36:03.587423  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.587435  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:03.587443  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:03.587506  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:03.621968  459741 cri.go:89] found id: ""
	I0717 19:36:03.622002  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.622014  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:03.622023  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:03.622095  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:03.655934  459741 cri.go:89] found id: ""
	I0717 19:36:03.655967  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.655976  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:03.655982  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:03.656051  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:03.690464  459741 cri.go:89] found id: ""
	I0717 19:36:03.690493  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.690503  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:03.690511  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:03.690575  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:03.727030  459741 cri.go:89] found id: ""
	I0717 19:36:03.727068  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.727080  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:03.727088  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:03.727158  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:03.760858  459741 cri.go:89] found id: ""
	I0717 19:36:03.760898  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.760907  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:03.760917  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:03.760931  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:03.774333  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:03.774366  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:03.849228  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:03.849255  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:03.849273  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:03.930165  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:03.930203  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:03.971833  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:03.971875  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:04.693899  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:07.192840  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:07.678006  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:10.176147  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:08.395699  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:10.894333  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:06.525723  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:06.539410  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:06.539502  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:06.580112  459741 cri.go:89] found id: ""
	I0717 19:36:06.580152  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.580173  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:06.580181  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:06.580272  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:06.622098  459741 cri.go:89] found id: ""
	I0717 19:36:06.622128  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.622136  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:06.622142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:06.622209  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:06.669930  459741 cri.go:89] found id: ""
	I0717 19:36:06.669962  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.669973  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:06.669982  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:06.670048  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:06.717072  459741 cri.go:89] found id: ""
	I0717 19:36:06.717111  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.717124  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:06.717132  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:06.717207  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:06.756637  459741 cri.go:89] found id: ""
	I0717 19:36:06.756672  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.756680  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:06.756694  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:06.756756  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:06.804359  459741 cri.go:89] found id: ""
	I0717 19:36:06.804388  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.804397  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:06.804404  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:06.804468  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:06.856082  459741 cri.go:89] found id: ""
	I0717 19:36:06.856111  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.856120  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:06.856125  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:06.856180  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:06.898141  459741 cri.go:89] found id: ""
	I0717 19:36:06.898170  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.898180  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:06.898191  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:06.898209  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:06.975635  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:06.975660  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:06.975676  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:07.055695  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:07.055741  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:07.096041  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:07.096077  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:07.146523  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:07.146570  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:09.661906  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:09.676994  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:09.677078  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:09.716287  459741 cri.go:89] found id: ""
	I0717 19:36:09.716315  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.716328  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:09.716337  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:09.716405  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:09.759489  459741 cri.go:89] found id: ""
	I0717 19:36:09.759521  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.759532  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:09.759541  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:09.759601  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:09.799604  459741 cri.go:89] found id: ""
	I0717 19:36:09.799634  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.799643  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:09.799649  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:09.799709  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:09.839542  459741 cri.go:89] found id: ""
	I0717 19:36:09.839572  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.839581  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:09.839588  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:09.839666  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:09.879061  459741 cri.go:89] found id: ""
	I0717 19:36:09.879098  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.879110  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:09.879118  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:09.879184  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:09.920903  459741 cri.go:89] found id: ""
	I0717 19:36:09.920931  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.920939  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:09.920946  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:09.921002  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:09.956362  459741 cri.go:89] found id: ""
	I0717 19:36:09.956391  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.956411  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:09.956429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:09.956508  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:09.992817  459741 cri.go:89] found id: ""
	I0717 19:36:09.992849  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.992859  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:09.992872  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:09.992889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:10.060594  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:10.060620  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:10.060660  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:10.141840  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:10.141895  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:10.182850  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:10.182889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:10.238946  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:10.238993  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:09.194101  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:11.693468  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.675987  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:15.176665  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.894710  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:15.394738  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.753796  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:12.766740  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:12.766816  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:12.799307  459741 cri.go:89] found id: ""
	I0717 19:36:12.799341  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.799351  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:12.799362  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:12.799439  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:12.838345  459741 cri.go:89] found id: ""
	I0717 19:36:12.838395  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.838408  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:12.838416  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:12.838482  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:12.876780  459741 cri.go:89] found id: ""
	I0717 19:36:12.876807  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.876816  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:12.876822  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:12.876907  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:12.913222  459741 cri.go:89] found id: ""
	I0717 19:36:12.913253  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.913263  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:12.913271  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:12.913334  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:12.948210  459741 cri.go:89] found id: ""
	I0717 19:36:12.948245  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.948255  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:12.948263  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:12.948328  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:12.980746  459741 cri.go:89] found id: ""
	I0717 19:36:12.980782  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.980794  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:12.980806  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:12.980871  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:13.015655  459741 cri.go:89] found id: ""
	I0717 19:36:13.015694  459741 logs.go:276] 0 containers: []
	W0717 19:36:13.015707  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:13.015715  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:13.015773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:13.050570  459741 cri.go:89] found id: ""
	I0717 19:36:13.050609  459741 logs.go:276] 0 containers: []
	W0717 19:36:13.050617  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:13.050627  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:13.050642  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:13.101031  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:13.101072  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:13.115206  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:13.115239  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:13.190949  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:13.190979  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:13.190994  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:13.267467  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:13.267508  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:15.808237  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:15.822498  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:15.822570  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:15.860509  459741 cri.go:89] found id: ""
	I0717 19:36:15.860545  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.860556  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:15.860564  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:15.860630  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:15.895608  459741 cri.go:89] found id: ""
	I0717 19:36:15.895655  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.895666  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:15.895674  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:15.895738  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:15.936113  459741 cri.go:89] found id: ""
	I0717 19:36:15.936148  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.936159  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:15.936168  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:15.936254  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:15.973146  459741 cri.go:89] found id: ""
	I0717 19:36:15.973186  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.973198  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:15.973207  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:15.973273  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:16.006122  459741 cri.go:89] found id: ""
	I0717 19:36:16.006164  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.006175  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:16.006183  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:16.006255  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:16.044352  459741 cri.go:89] found id: ""
	I0717 19:36:16.044385  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.044397  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:16.044406  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:16.044476  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:16.081573  459741 cri.go:89] found id: ""
	I0717 19:36:16.081614  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.081625  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:16.081637  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:16.081707  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:16.120444  459741 cri.go:89] found id: ""
	I0717 19:36:16.120480  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.120506  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:16.120520  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:16.120536  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:16.171563  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:16.171601  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:16.185534  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:16.185564  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:16.258627  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:16.258657  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:16.258672  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:16.341345  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:16.341390  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:14.193370  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:16.693933  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:17.680240  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:19.681457  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:17.894353  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:19.894879  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:18.883092  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:18.897931  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:18.898015  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:18.932054  459741 cri.go:89] found id: ""
	I0717 19:36:18.932085  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.932096  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:18.932104  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:18.932162  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:18.966450  459741 cri.go:89] found id: ""
	I0717 19:36:18.966478  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.966490  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:18.966498  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:18.966561  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:18.999881  459741 cri.go:89] found id: ""
	I0717 19:36:18.999909  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.999920  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:18.999927  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:18.999984  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:19.036701  459741 cri.go:89] found id: ""
	I0717 19:36:19.036730  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.036746  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:19.036753  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:19.036824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:19.073488  459741 cri.go:89] found id: ""
	I0717 19:36:19.073515  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.073523  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:19.073528  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:19.073582  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:19.109128  459741 cri.go:89] found id: ""
	I0717 19:36:19.109161  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.109171  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:19.109179  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:19.109249  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:19.148452  459741 cri.go:89] found id: ""
	I0717 19:36:19.148494  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.148509  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:19.148518  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:19.148595  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:19.184056  459741 cri.go:89] found id: ""
	I0717 19:36:19.184086  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.184097  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:19.184112  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:19.184129  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:19.198518  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:19.198553  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:19.273176  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:19.273198  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:19.273213  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:19.347999  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:19.348042  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:19.390847  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:19.390890  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:19.194436  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:21.693020  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:22.176414  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:24.676290  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:22.395588  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:24.894771  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:21.946700  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:21.960590  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:21.960655  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:21.994632  459741 cri.go:89] found id: ""
	I0717 19:36:21.994662  459741 logs.go:276] 0 containers: []
	W0717 19:36:21.994670  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:21.994677  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:21.994738  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:22.029390  459741 cri.go:89] found id: ""
	I0717 19:36:22.029419  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.029428  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:22.029434  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:22.029484  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:22.065632  459741 cri.go:89] found id: ""
	I0717 19:36:22.065668  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.065679  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:22.065687  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:22.065792  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:22.100893  459741 cri.go:89] found id: ""
	I0717 19:36:22.100931  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.100942  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:22.100950  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:22.101007  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:22.137064  459741 cri.go:89] found id: ""
	I0717 19:36:22.137099  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.137110  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:22.137118  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:22.137187  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:22.176027  459741 cri.go:89] found id: ""
	I0717 19:36:22.176061  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.176071  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:22.176080  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:22.176147  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:22.211035  459741 cri.go:89] found id: ""
	I0717 19:36:22.211060  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.211068  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:22.211076  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:22.211129  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:22.246541  459741 cri.go:89] found id: ""
	I0717 19:36:22.246577  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.246589  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:22.246617  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:22.246635  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:22.288154  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:22.288198  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:22.342243  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:22.342295  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:22.356125  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:22.356157  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:22.427767  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:22.427793  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:22.427806  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:25.011986  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:25.026057  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:25.026134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:25.060744  459741 cri.go:89] found id: ""
	I0717 19:36:25.060778  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.060788  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:25.060794  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:25.060857  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:25.094760  459741 cri.go:89] found id: ""
	I0717 19:36:25.094799  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.094810  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:25.094818  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:25.094884  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:25.129937  459741 cri.go:89] found id: ""
	I0717 19:36:25.129980  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.129990  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:25.129996  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:25.130053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:25.162886  459741 cri.go:89] found id: ""
	I0717 19:36:25.162914  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.162922  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:25.162927  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:25.162994  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:25.199261  459741 cri.go:89] found id: ""
	I0717 19:36:25.199290  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.199312  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:25.199329  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:25.199388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:25.236454  459741 cri.go:89] found id: ""
	I0717 19:36:25.236494  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.236506  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:25.236514  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:25.236569  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:25.272257  459741 cri.go:89] found id: ""
	I0717 19:36:25.272293  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.272304  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:25.272312  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:25.272381  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:25.308442  459741 cri.go:89] found id: ""
	I0717 19:36:25.308478  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.308504  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:25.308517  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:25.308534  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:25.362269  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:25.362321  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:25.376994  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:25.377026  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:25.450219  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:25.450242  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:25.450256  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:25.537123  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:25.537161  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:23.693457  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.192763  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.677228  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:29.175390  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.176353  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.895481  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:29.393635  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.395374  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:28.077415  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:28.093047  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:28.093126  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:28.128129  459741 cri.go:89] found id: ""
	I0717 19:36:28.128158  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.128166  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:28.128180  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:28.128234  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:28.170796  459741 cri.go:89] found id: ""
	I0717 19:36:28.170834  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.170845  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:28.170853  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:28.170924  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:28.208250  459741 cri.go:89] found id: ""
	I0717 19:36:28.208278  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.208287  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:28.208304  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:28.208385  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:28.251511  459741 cri.go:89] found id: ""
	I0717 19:36:28.251547  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.251567  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:28.251575  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:28.251648  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:28.286597  459741 cri.go:89] found id: ""
	I0717 19:36:28.286633  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.286643  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:28.286651  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:28.286715  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:28.323089  459741 cri.go:89] found id: ""
	I0717 19:36:28.323119  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.323127  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:28.323133  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:28.323192  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:28.357941  459741 cri.go:89] found id: ""
	I0717 19:36:28.357972  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.357980  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:28.357987  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:28.358053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:28.393141  459741 cri.go:89] found id: ""
	I0717 19:36:28.393171  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.393182  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:28.393192  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:28.393208  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:28.446992  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:28.447031  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:28.460386  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:28.460416  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:28.524640  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:28.524671  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:28.524694  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:28.605322  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:28.605363  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:31.145909  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:31.159567  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:31.159686  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:31.196086  459741 cri.go:89] found id: ""
	I0717 19:36:31.196113  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.196125  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:31.196134  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:31.196186  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:31.238076  459741 cri.go:89] found id: ""
	I0717 19:36:31.238104  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.238111  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:31.238117  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:31.238172  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:31.274360  459741 cri.go:89] found id: ""
	I0717 19:36:31.274391  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.274400  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:31.274406  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:31.274462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:31.308845  459741 cri.go:89] found id: ""
	I0717 19:36:31.308871  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.308880  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:31.308886  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:31.308946  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:31.344978  459741 cri.go:89] found id: ""
	I0717 19:36:31.345010  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.345021  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:31.345028  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:31.345094  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:31.381741  459741 cri.go:89] found id: ""
	I0717 19:36:31.381767  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.381775  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:31.381783  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:31.381837  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:31.417522  459741 cri.go:89] found id: ""
	I0717 19:36:31.417554  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.417563  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:31.417571  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:31.417635  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:31.451121  459741 cri.go:89] found id: ""
	I0717 19:36:31.451152  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.451165  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:31.451177  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:31.451195  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:28.195048  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:30.693260  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:33.676171  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:35.676215  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:33.894329  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:36.394573  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.542015  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:31.542063  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:31.583418  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:31.583449  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:31.635807  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:31.635845  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:31.649144  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:31.649172  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:31.728539  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:34.229124  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:34.242482  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:34.242554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:34.276554  459741 cri.go:89] found id: ""
	I0717 19:36:34.276602  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.276610  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:34.276616  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:34.276671  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:34.314766  459741 cri.go:89] found id: ""
	I0717 19:36:34.314799  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.314807  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:34.314813  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:34.314874  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:34.352765  459741 cri.go:89] found id: ""
	I0717 19:36:34.352798  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.352809  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:34.352817  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:34.352886  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:34.386519  459741 cri.go:89] found id: ""
	I0717 19:36:34.386556  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.386564  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:34.386570  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:34.386669  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:34.423789  459741 cri.go:89] found id: ""
	I0717 19:36:34.423820  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.423829  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:34.423838  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:34.423911  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:34.458849  459741 cri.go:89] found id: ""
	I0717 19:36:34.458883  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.458895  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:34.458903  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:34.458969  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:34.494653  459741 cri.go:89] found id: ""
	I0717 19:36:34.494686  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.494697  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:34.494705  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:34.494770  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:34.529386  459741 cri.go:89] found id: ""
	I0717 19:36:34.529423  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.529431  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:34.529441  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:34.529455  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:34.582161  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:34.582204  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:34.596699  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:34.596732  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:34.673468  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:34.673501  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:34.673519  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:34.751134  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:34.751180  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:33.193313  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:35.193610  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:38.178018  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.676860  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:38.395038  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.396311  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:37.290429  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:37.304307  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:37.304391  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:37.338790  459741 cri.go:89] found id: ""
	I0717 19:36:37.338818  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.338827  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:37.338833  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:37.338903  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:37.376923  459741 cri.go:89] found id: ""
	I0717 19:36:37.376953  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.376961  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:37.376966  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:37.377017  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:37.415988  459741 cri.go:89] found id: ""
	I0717 19:36:37.416016  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.416024  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:37.416029  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:37.416083  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:37.449398  459741 cri.go:89] found id: ""
	I0717 19:36:37.449435  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.449447  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:37.449459  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:37.449532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:37.489489  459741 cri.go:89] found id: ""
	I0717 19:36:37.489525  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.489535  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:37.489544  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:37.489609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:37.528055  459741 cri.go:89] found id: ""
	I0717 19:36:37.528092  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.528103  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:37.528112  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:37.528174  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:37.564295  459741 cri.go:89] found id: ""
	I0717 19:36:37.564332  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.564344  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:37.564352  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:37.564421  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:37.597909  459741 cri.go:89] found id: ""
	I0717 19:36:37.597949  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.597960  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:37.597976  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:37.598002  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:37.652104  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:37.652147  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:37.668341  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:37.668374  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:37.746663  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:37.746693  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:37.746706  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:37.822210  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:37.822250  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:40.370417  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:40.385795  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:40.385873  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:40.431821  459741 cri.go:89] found id: ""
	I0717 19:36:40.431861  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.431873  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:40.431881  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:40.431952  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:40.468302  459741 cri.go:89] found id: ""
	I0717 19:36:40.468334  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.468346  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:40.468354  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:40.468409  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:40.503678  459741 cri.go:89] found id: ""
	I0717 19:36:40.503709  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.503727  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:40.503733  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:40.503785  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:40.540732  459741 cri.go:89] found id: ""
	I0717 19:36:40.540763  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.540772  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:40.540778  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:40.540843  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:40.589546  459741 cri.go:89] found id: ""
	I0717 19:36:40.589574  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.589583  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:40.589590  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:40.589642  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:40.625314  459741 cri.go:89] found id: ""
	I0717 19:36:40.625350  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.625359  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:40.625368  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:40.625435  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:40.663946  459741 cri.go:89] found id: ""
	I0717 19:36:40.663974  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.663982  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:40.663990  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:40.664048  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:40.701681  459741 cri.go:89] found id: ""
	I0717 19:36:40.701712  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.701722  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:40.701732  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:40.701747  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:40.762876  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:40.762913  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:40.777993  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:40.778039  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:40.854973  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:40.854996  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:40.855015  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:40.935075  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:40.935114  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:37.693613  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.192783  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:42.193024  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:43.176326  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:45.675745  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:42.895180  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:45.396439  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:43.476048  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:43.490580  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:43.490652  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:43.525613  459741 cri.go:89] found id: ""
	I0717 19:36:43.525649  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.525658  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:43.525665  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:43.525722  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:43.564102  459741 cri.go:89] found id: ""
	I0717 19:36:43.564147  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.564158  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:43.564166  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:43.564230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:43.603290  459741 cri.go:89] found id: ""
	I0717 19:36:43.603316  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.603323  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:43.603329  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:43.603387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:43.638001  459741 cri.go:89] found id: ""
	I0717 19:36:43.638031  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.638038  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:43.638056  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:43.638134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:43.672992  459741 cri.go:89] found id: ""
	I0717 19:36:43.673026  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.673037  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:43.673045  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:43.673115  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:43.713130  459741 cri.go:89] found id: ""
	I0717 19:36:43.713165  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.713176  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:43.713188  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:43.713255  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:43.747637  459741 cri.go:89] found id: ""
	I0717 19:36:43.747685  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.747694  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:43.747702  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:43.747771  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:43.784425  459741 cri.go:89] found id: ""
	I0717 19:36:43.784460  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.784471  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:43.784492  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:43.784510  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:43.798454  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:43.798483  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:43.875753  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:43.875776  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:43.875793  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:43.957009  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:43.957052  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:44.001089  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:44.001122  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:44.193299  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:46.193520  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:47.679212  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:50.176924  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:47.894374  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:49.898348  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:46.554298  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:46.568658  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:46.568730  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:46.604721  459741 cri.go:89] found id: ""
	I0717 19:36:46.604750  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.604759  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:46.604765  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:46.604815  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:46.644164  459741 cri.go:89] found id: ""
	I0717 19:36:46.644196  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.644209  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:46.644217  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:46.644288  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:46.683657  459741 cri.go:89] found id: ""
	I0717 19:36:46.683695  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.683702  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:46.683708  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:46.683773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:46.720967  459741 cri.go:89] found id: ""
	I0717 19:36:46.720995  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.721003  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:46.721008  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:46.721059  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:46.755825  459741 cri.go:89] found id: ""
	I0717 19:36:46.755854  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.755866  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:46.755876  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:46.755946  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:46.797091  459741 cri.go:89] found id: ""
	I0717 19:36:46.797130  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.797138  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:46.797145  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:46.797201  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:46.838053  459741 cri.go:89] found id: ""
	I0717 19:36:46.838090  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.838100  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:46.838108  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:46.838176  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:46.881516  459741 cri.go:89] found id: ""
	I0717 19:36:46.881549  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.881558  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:46.881567  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:46.881582  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:46.952407  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:46.952434  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:46.952457  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:47.043739  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:47.043787  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:47.083335  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:47.083367  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:47.138212  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:47.138256  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:49.656394  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:49.670755  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:49.670830  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:49.709177  459741 cri.go:89] found id: ""
	I0717 19:36:49.709208  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.709217  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:49.709222  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:49.709286  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:49.745905  459741 cri.go:89] found id: ""
	I0717 19:36:49.745940  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.745952  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:49.745960  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:49.746038  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:49.779073  459741 cri.go:89] found id: ""
	I0717 19:36:49.779106  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.779117  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:49.779124  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:49.779190  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:49.815459  459741 cri.go:89] found id: ""
	I0717 19:36:49.815504  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.815516  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:49.815525  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:49.815635  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:49.854714  459741 cri.go:89] found id: ""
	I0717 19:36:49.854751  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.854760  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:49.854766  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:49.854821  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:49.897717  459741 cri.go:89] found id: ""
	I0717 19:36:49.897742  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.897752  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:49.897760  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:49.897824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:49.933388  459741 cri.go:89] found id: ""
	I0717 19:36:49.933419  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.933429  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:49.933437  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:49.933527  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:49.971955  459741 cri.go:89] found id: ""
	I0717 19:36:49.971988  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.971999  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:49.972011  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:49.972029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:50.025761  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:50.025801  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:50.039771  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:50.039801  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:50.111349  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:50.111374  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:50.111388  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:50.193972  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:50.194004  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:48.693842  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:51.192837  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.177150  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:54.675862  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.394841  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:54.395035  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:56.395227  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.733468  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:52.749052  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:52.749119  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:52.785364  459741 cri.go:89] found id: ""
	I0717 19:36:52.785392  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.785400  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:52.785407  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:52.785462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:52.824177  459741 cri.go:89] found id: ""
	I0717 19:36:52.824211  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.824219  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:52.824225  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:52.824298  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:52.860781  459741 cri.go:89] found id: ""
	I0717 19:36:52.860812  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.860823  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:52.860831  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:52.860904  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:52.903963  459741 cri.go:89] found id: ""
	I0717 19:36:52.903995  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.904006  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:52.904014  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:52.904080  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:52.944920  459741 cri.go:89] found id: ""
	I0717 19:36:52.944950  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.944961  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:52.944968  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:52.945033  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:53.007409  459741 cri.go:89] found id: ""
	I0717 19:36:53.007438  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.007449  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:53.007456  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:53.007526  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:53.048160  459741 cri.go:89] found id: ""
	I0717 19:36:53.048193  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.048205  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:53.048213  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:53.048285  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:53.083493  459741 cri.go:89] found id: ""
	I0717 19:36:53.083522  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.083534  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:53.083546  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:53.083563  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:53.139380  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:53.139425  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:53.154005  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:53.154107  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:53.230123  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:53.230146  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:53.230160  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:53.307183  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:53.307228  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:55.849344  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:55.863554  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:55.863625  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:55.899317  459741 cri.go:89] found id: ""
	I0717 19:36:55.899347  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.899358  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:55.899365  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:55.899433  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:55.934725  459741 cri.go:89] found id: ""
	I0717 19:36:55.934760  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.934771  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:55.934779  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:55.934854  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:55.967721  459741 cri.go:89] found id: ""
	I0717 19:36:55.967751  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.967760  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:55.967768  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:55.967835  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:56.001163  459741 cri.go:89] found id: ""
	I0717 19:36:56.001193  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.001203  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:56.001211  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:56.001309  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:56.040863  459741 cri.go:89] found id: ""
	I0717 19:36:56.040898  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.040910  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:56.040918  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:56.040990  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:56.075045  459741 cri.go:89] found id: ""
	I0717 19:36:56.075075  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.075083  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:56.075090  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:56.075141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:56.115641  459741 cri.go:89] found id: ""
	I0717 19:36:56.115673  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.115683  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:56.115692  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:56.115757  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:56.154952  459741 cri.go:89] found id: ""
	I0717 19:36:56.154989  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.155000  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:56.155012  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:56.155029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:56.168624  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:56.168655  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:56.241129  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:56.241149  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:56.241161  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:56.326577  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:56.326627  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:56.370835  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:56.370896  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:53.194230  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:55.693021  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:56.677604  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:59.177845  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:58.395814  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:00.894894  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:58.923483  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:58.936869  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:58.936971  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:58.970975  459741 cri.go:89] found id: ""
	I0717 19:36:58.971015  459741 logs.go:276] 0 containers: []
	W0717 19:36:58.971026  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:58.971036  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:58.971103  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:59.004902  459741 cri.go:89] found id: ""
	I0717 19:36:59.004936  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.004945  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:59.004953  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:59.005021  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:59.049595  459741 cri.go:89] found id: ""
	I0717 19:36:59.049627  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.049635  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:59.049642  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:59.049694  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:59.084143  459741 cri.go:89] found id: ""
	I0717 19:36:59.084175  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.084185  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:59.084192  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:59.084244  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:59.121362  459741 cri.go:89] found id: ""
	I0717 19:36:59.121397  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.121408  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:59.121416  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:59.121486  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:59.158791  459741 cri.go:89] found id: ""
	I0717 19:36:59.158823  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.158832  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:59.158839  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:59.158907  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:59.196785  459741 cri.go:89] found id: ""
	I0717 19:36:59.196814  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.196825  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:59.196832  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:59.196928  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:59.233526  459741 cri.go:89] found id: ""
	I0717 19:36:59.233585  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.233602  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:59.233615  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:59.233633  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:59.287586  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:59.287629  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:59.303060  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:59.303109  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:59.380105  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:59.380141  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:59.380160  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:59.457673  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:59.457723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:57.693064  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:59.696137  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:02.194529  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:01.676676  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:04.174546  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.176591  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:02.895007  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:04.896128  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:01.999397  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:02.013638  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:02.013769  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:02.053831  459741 cri.go:89] found id: ""
	I0717 19:37:02.053860  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.053869  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:02.053875  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:02.053929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:02.095600  459741 cri.go:89] found id: ""
	I0717 19:37:02.095634  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.095644  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:02.095650  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:02.095703  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:02.134219  459741 cri.go:89] found id: ""
	I0717 19:37:02.134253  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.134267  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:02.134277  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:02.134351  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:02.172985  459741 cri.go:89] found id: ""
	I0717 19:37:02.173017  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.173029  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:02.173037  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:02.173109  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:02.210465  459741 cri.go:89] found id: ""
	I0717 19:37:02.210492  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.210500  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:02.210506  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:02.210562  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:02.246736  459741 cri.go:89] found id: ""
	I0717 19:37:02.246767  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.246775  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:02.246781  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:02.246834  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:02.285131  459741 cri.go:89] found id: ""
	I0717 19:37:02.285166  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.285177  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:02.285185  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:02.285254  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:02.323199  459741 cri.go:89] found id: ""
	I0717 19:37:02.323232  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.323241  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:02.323252  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:02.323266  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:02.337356  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:02.337392  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:02.411669  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:02.411706  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:02.411724  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:02.488543  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:02.488590  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:02.531147  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:02.531189  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:05.085888  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:05.099059  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:05.099134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:05.140745  459741 cri.go:89] found id: ""
	I0717 19:37:05.140771  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.140783  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:05.140791  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:05.140859  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:05.175634  459741 cri.go:89] found id: ""
	I0717 19:37:05.175669  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.175679  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:05.175687  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:05.175761  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:05.213114  459741 cri.go:89] found id: ""
	I0717 19:37:05.213148  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.213157  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:05.213171  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:05.213242  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:05.249756  459741 cri.go:89] found id: ""
	I0717 19:37:05.249791  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.249803  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:05.249811  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:05.249882  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:05.285601  459741 cri.go:89] found id: ""
	I0717 19:37:05.285634  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.285645  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:05.285654  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:05.285729  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:05.325523  459741 cri.go:89] found id: ""
	I0717 19:37:05.325557  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.325566  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:05.325573  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:05.325641  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:05.364250  459741 cri.go:89] found id: ""
	I0717 19:37:05.364284  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.364295  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:05.364303  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:05.364377  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:05.399924  459741 cri.go:89] found id: ""
	I0717 19:37:05.399951  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.399958  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:05.399967  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:05.399979  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:05.456770  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:05.456821  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:05.472041  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:05.472073  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:05.539653  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:05.539685  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:05.539703  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:05.628977  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:05.629023  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:04.693176  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.693594  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:08.677525  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.175472  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.897414  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:09.394322  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.395513  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:08.181585  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:08.195153  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:08.195225  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:08.234624  459741 cri.go:89] found id: ""
	I0717 19:37:08.234662  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.234674  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:08.234682  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:08.234739  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:08.273034  459741 cri.go:89] found id: ""
	I0717 19:37:08.273069  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.273081  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:08.273089  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:08.273157  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:08.310695  459741 cri.go:89] found id: ""
	I0717 19:37:08.310728  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.310740  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:08.310749  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:08.310815  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:08.346891  459741 cri.go:89] found id: ""
	I0717 19:37:08.346925  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.346936  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:08.346944  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:08.347015  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:08.384830  459741 cri.go:89] found id: ""
	I0717 19:37:08.384863  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.384872  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:08.384878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:08.384948  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:08.423939  459741 cri.go:89] found id: ""
	I0717 19:37:08.423973  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.423983  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:08.423991  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:08.424046  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:08.460822  459741 cri.go:89] found id: ""
	I0717 19:37:08.460854  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.460863  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:08.460874  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:08.460929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:08.497122  459741 cri.go:89] found id: ""
	I0717 19:37:08.497152  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.497164  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:08.497182  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:08.497197  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:08.549130  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:08.549179  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:08.566072  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:08.566109  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:08.637602  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:08.637629  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:08.637647  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:08.729025  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:08.729078  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:11.270696  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:11.285472  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:11.285554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:11.319587  459741 cri.go:89] found id: ""
	I0717 19:37:11.319629  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.319638  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:11.319646  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:11.319712  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:11.353044  459741 cri.go:89] found id: ""
	I0717 19:37:11.353077  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.353087  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:11.353093  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:11.353189  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:11.389515  459741 cri.go:89] found id: ""
	I0717 19:37:11.389545  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.389557  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:11.389565  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:11.389634  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:11.430599  459741 cri.go:89] found id: ""
	I0717 19:37:11.430632  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.430640  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:11.430646  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:11.430714  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:11.472171  459741 cri.go:89] found id: ""
	I0717 19:37:11.472207  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.472217  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:11.472223  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:11.472295  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:09.193245  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.695407  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:13.176224  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:15.179677  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:13.895579  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:16.394706  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.510599  459741 cri.go:89] found id: ""
	I0717 19:37:11.510672  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.510689  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:11.510706  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:11.510779  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:11.550914  459741 cri.go:89] found id: ""
	I0717 19:37:11.550946  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.550954  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:11.550960  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:11.551017  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:11.591129  459741 cri.go:89] found id: ""
	I0717 19:37:11.591205  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.591219  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:11.591233  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:11.591252  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:11.646229  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:11.646265  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:11.661204  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:11.661243  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:11.742396  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:11.742426  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:11.742442  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:11.824647  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:11.824687  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:14.364360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:14.381022  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:14.381101  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:14.419922  459741 cri.go:89] found id: ""
	I0717 19:37:14.419960  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.419971  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:14.419977  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:14.420032  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:14.459256  459741 cri.go:89] found id: ""
	I0717 19:37:14.459288  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.459296  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:14.459317  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:14.459387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:14.494487  459741 cri.go:89] found id: ""
	I0717 19:37:14.494517  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.494528  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:14.494535  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:14.494609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:14.528878  459741 cri.go:89] found id: ""
	I0717 19:37:14.528919  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.528928  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:14.528934  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:14.528999  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:14.564401  459741 cri.go:89] found id: ""
	I0717 19:37:14.564439  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.564451  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:14.564460  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:14.564548  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:14.604641  459741 cri.go:89] found id: ""
	I0717 19:37:14.604682  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.604694  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:14.604703  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:14.604770  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:14.638128  459741 cri.go:89] found id: ""
	I0717 19:37:14.638159  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.638168  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:14.638175  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:14.638245  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:14.679475  459741 cri.go:89] found id: ""
	I0717 19:37:14.679508  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.679518  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:14.679529  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:14.679545  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:14.733829  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:14.733871  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:14.748878  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:14.748910  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:14.821043  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:14.821073  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:14.821089  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:14.905137  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:14.905178  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:14.193577  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:16.193939  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:17.181158  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:19.675868  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:18.894678  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:20.895683  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:17.445221  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:17.459152  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:17.459221  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:17.498175  459741 cri.go:89] found id: ""
	I0717 19:37:17.498204  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.498216  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:17.498226  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:17.498287  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:17.534460  459741 cri.go:89] found id: ""
	I0717 19:37:17.534498  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.534506  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:17.534512  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:17.534571  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:17.571998  459741 cri.go:89] found id: ""
	I0717 19:37:17.572030  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.572040  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:17.572047  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:17.572110  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:17.611184  459741 cri.go:89] found id: ""
	I0717 19:37:17.611215  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.611224  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:17.611231  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:17.611282  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:17.656227  459741 cri.go:89] found id: ""
	I0717 19:37:17.656275  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.656287  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:17.656295  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:17.656361  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:17.695693  459741 cri.go:89] found id: ""
	I0717 19:37:17.695727  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.695746  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:17.695763  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:17.695835  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:17.734017  459741 cri.go:89] found id: ""
	I0717 19:37:17.734043  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.734052  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:17.734057  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:17.734123  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:17.771539  459741 cri.go:89] found id: ""
	I0717 19:37:17.771575  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.771586  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:17.771597  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:17.771611  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:17.811742  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:17.811783  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:17.861865  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:17.861909  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:17.876221  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:17.876255  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:17.957239  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:17.957262  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:17.957278  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:20.539123  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:20.554464  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:20.554546  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:20.591656  459741 cri.go:89] found id: ""
	I0717 19:37:20.591697  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.591706  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:20.591716  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:20.591775  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:20.629470  459741 cri.go:89] found id: ""
	I0717 19:37:20.629504  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.629513  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:20.629519  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:20.629587  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:20.670022  459741 cri.go:89] found id: ""
	I0717 19:37:20.670090  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.670108  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:20.670120  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:20.670199  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:20.711820  459741 cri.go:89] found id: ""
	I0717 19:37:20.711858  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.711869  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:20.711878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:20.711952  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:20.746305  459741 cri.go:89] found id: ""
	I0717 19:37:20.746339  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.746349  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:20.746356  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:20.746423  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:20.782218  459741 cri.go:89] found id: ""
	I0717 19:37:20.782255  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.782266  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:20.782275  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:20.782351  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:20.818704  459741 cri.go:89] found id: ""
	I0717 19:37:20.818740  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.818749  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:20.818757  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:20.818820  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:20.853662  459741 cri.go:89] found id: ""
	I0717 19:37:20.853693  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.853701  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:20.853710  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:20.853723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:20.896351  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:20.896377  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:20.948402  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:20.948450  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:20.962807  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:20.962840  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:21.057005  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:21.057036  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:21.057055  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:18.693664  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:21.192940  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:21.676124  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:24.175970  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:23.395791  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:25.894186  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:23.634596  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:23.648460  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:23.648555  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:23.687289  459741 cri.go:89] found id: ""
	I0717 19:37:23.687320  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.687331  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:23.687341  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:23.687407  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:23.725794  459741 cri.go:89] found id: ""
	I0717 19:37:23.725826  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.725847  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:23.725855  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:23.725916  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:23.761575  459741 cri.go:89] found id: ""
	I0717 19:37:23.761624  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.761635  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:23.761643  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:23.761709  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:23.800061  459741 cri.go:89] found id: ""
	I0717 19:37:23.800098  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.800111  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:23.800120  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:23.800190  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:23.836067  459741 cri.go:89] found id: ""
	I0717 19:37:23.836098  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.836107  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:23.836113  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:23.836170  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:23.875151  459741 cri.go:89] found id: ""
	I0717 19:37:23.875179  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.875192  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:23.875200  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:23.875268  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:23.913641  459741 cri.go:89] found id: ""
	I0717 19:37:23.913675  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.913685  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:23.913693  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:23.913759  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:23.950362  459741 cri.go:89] found id: ""
	I0717 19:37:23.950391  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.950400  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:23.950410  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:23.950426  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:24.000879  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:24.000924  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:24.014874  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:24.014912  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:24.086589  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:24.086624  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:24.086639  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:24.163160  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:24.163208  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
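The cycle above queries the runtime once per control-plane component (sudo crictl ps -a --quiet --name=<component>) and finds nothing, which is consistent with the apiserver never having come up on this node. The same check can be collapsed into one pass; a minimal sketch, assuming crictl is on PATH and already pointed at the CRI-O socket as in the log:

    # hypothetical one-pass variant of the per-component listing above
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-no containers found}"
    done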
	I0717 19:37:23.194522  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:25.694306  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:26.675299  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:28.675607  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:31.176216  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:27.895077  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:29.895208  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:26.705781  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:26.720471  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:26.720562  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:26.776895  459741 cri.go:89] found id: ""
	I0717 19:37:26.776927  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.776936  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:26.776945  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:26.777038  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:26.812191  459741 cri.go:89] found id: ""
	I0717 19:37:26.812219  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.812228  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:26.812234  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:26.812288  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:26.851142  459741 cri.go:89] found id: ""
	I0717 19:37:26.851174  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.851183  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:26.851189  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:26.851243  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:26.887218  459741 cri.go:89] found id: ""
	I0717 19:37:26.887254  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.887266  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:26.887274  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:26.887364  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:26.924197  459741 cri.go:89] found id: ""
	I0717 19:37:26.924226  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.924234  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:26.924240  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:26.924293  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:26.964475  459741 cri.go:89] found id: ""
	I0717 19:37:26.964528  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.964538  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:26.964545  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:26.964618  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:27.001951  459741 cri.go:89] found id: ""
	I0717 19:37:27.002001  459741 logs.go:276] 0 containers: []
	W0717 19:37:27.002010  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:27.002017  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:27.002068  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:27.037062  459741 cri.go:89] found id: ""
	I0717 19:37:27.037094  459741 logs.go:276] 0 containers: []
	W0717 19:37:27.037108  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:27.037122  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:27.037140  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:27.090343  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:27.090389  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:27.104534  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:27.104579  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:27.179957  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:27.179982  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:27.179995  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:27.260358  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:27.260399  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:29.806487  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:29.821519  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:29.821584  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:29.856293  459741 cri.go:89] found id: ""
	I0717 19:37:29.856328  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.856338  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:29.856347  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:29.856413  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:29.893174  459741 cri.go:89] found id: ""
	I0717 19:37:29.893210  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.893220  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:29.893229  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:29.893294  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:29.928264  459741 cri.go:89] found id: ""
	I0717 19:37:29.928298  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.928309  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:29.928316  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:29.928386  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:29.963399  459741 cri.go:89] found id: ""
	I0717 19:37:29.963441  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.963453  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:29.963461  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:29.963532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:30.001835  459741 cri.go:89] found id: ""
	I0717 19:37:30.001868  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.001878  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:30.001886  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:30.001953  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:30.039476  459741 cri.go:89] found id: ""
	I0717 19:37:30.039507  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.039516  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:30.039526  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:30.039601  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:30.076051  459741 cri.go:89] found id: ""
	I0717 19:37:30.076089  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.076101  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:30.076121  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:30.076198  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:30.110959  459741 cri.go:89] found id: ""
	I0717 19:37:30.110988  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.111000  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:30.111013  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:30.111029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:30.195062  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:30.195101  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:30.235830  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:30.235872  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:30.291057  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:30.291098  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:30.306510  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:30.306543  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:30.382689  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
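Every "describe nodes" attempt above fails the same way: the apiserver on localhost:8443 refuses connections, so kubectl exits with status 1 before producing any output. A minimal sketch of how one could confirm reachability from inside the node before invoking kubectl (assumes curl is available in the guest and that the apiserver is expected on the default 8443 port; neither check is part of the minikube log-gathering code itself):

    # hypothetical reachability probe, run inside the node (e.g. via `minikube ssh`)
    if curl -ksf --max-time 2 https://localhost:8443/healthz >/dev/null; then
      sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    else
      echo "apiserver not reachable on localhost:8443 - describe nodes would fail"
    fi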
	I0717 19:37:28.193720  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:30.693187  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.193323  459147 pod_ready.go:81] duration metric: took 4m0.007067784s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	E0717 19:37:32.193346  459147 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 19:37:32.193354  459147 pod_ready.go:38] duration metric: took 4m5.556690666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:37:32.193373  459147 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:37:32.193409  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:32.193469  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:32.245735  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:32.245775  459147 cri.go:89] found id: ""
	I0717 19:37:32.245785  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:32.245865  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.250669  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:32.250736  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:32.291837  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:32.291863  459147 cri.go:89] found id: ""
	I0717 19:37:32.291873  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:32.291944  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.296739  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:32.296806  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:32.335823  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:32.335854  459147 cri.go:89] found id: ""
	I0717 19:37:32.335873  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:32.335944  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.341789  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:32.341875  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:32.382106  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:32.382128  459147 cri.go:89] found id: ""
	I0717 19:37:32.382136  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:32.382183  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.386399  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:32.386453  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:32.426319  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:32.426348  459147 cri.go:89] found id: ""
	I0717 19:37:32.426358  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:32.426415  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.431280  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:32.431363  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:33.176404  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:35.177851  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.397457  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:34.894702  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.883437  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:32.898085  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:32.898159  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:32.933782  459741 cri.go:89] found id: ""
	I0717 19:37:32.933813  459741 logs.go:276] 0 containers: []
	W0717 19:37:32.933823  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:32.933842  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:32.933909  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:32.973843  459741 cri.go:89] found id: ""
	I0717 19:37:32.973871  459741 logs.go:276] 0 containers: []
	W0717 19:37:32.973879  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:32.973885  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:32.973936  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:33.010691  459741 cri.go:89] found id: ""
	I0717 19:37:33.010718  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.010727  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:33.010732  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:33.010791  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:33.051223  459741 cri.go:89] found id: ""
	I0717 19:37:33.051258  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.051269  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:33.051276  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:33.051345  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:33.091182  459741 cri.go:89] found id: ""
	I0717 19:37:33.091212  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.091220  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:33.091225  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:33.091279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:33.128755  459741 cri.go:89] found id: ""
	I0717 19:37:33.128791  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.128804  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:33.128820  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:33.128887  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:33.171834  459741 cri.go:89] found id: ""
	I0717 19:37:33.171871  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.171883  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:33.171890  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:33.171956  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:33.230954  459741 cri.go:89] found id: ""
	I0717 19:37:33.230982  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.230990  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:33.231001  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:33.231013  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:33.325437  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:33.325483  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:33.325500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:33.418548  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:33.418590  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:33.467574  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:33.467614  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:33.521312  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:33.521346  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.037360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:36.051209  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:36.051279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:36.088849  459741 cri.go:89] found id: ""
	I0717 19:37:36.088897  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.088909  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:36.088916  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:36.088973  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:36.124070  459741 cri.go:89] found id: ""
	I0717 19:37:36.124106  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.124118  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:36.124125  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:36.124199  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:36.159373  459741 cri.go:89] found id: ""
	I0717 19:37:36.159402  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.159410  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:36.159415  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:36.159467  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:36.197269  459741 cri.go:89] found id: ""
	I0717 19:37:36.197294  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.197302  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:36.197337  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:36.197389  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:36.231024  459741 cri.go:89] found id: ""
	I0717 19:37:36.231060  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.231072  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:36.231080  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:36.231152  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:36.265388  459741 cri.go:89] found id: ""
	I0717 19:37:36.265414  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.265422  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:36.265429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:36.265477  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:36.301738  459741 cri.go:89] found id: ""
	I0717 19:37:36.301774  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.301786  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:36.301794  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:36.301892  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:36.340042  459741 cri.go:89] found id: ""
	I0717 19:37:36.340072  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.340080  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:36.340091  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:36.340113  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:36.389928  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:36.389962  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:36.442668  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:36.442698  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.458862  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:36.458908  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:32.470477  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:32.470505  459147 cri.go:89] found id: ""
	I0717 19:37:32.470514  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:32.470579  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.474790  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:32.474845  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:32.511020  459147 cri.go:89] found id: ""
	I0717 19:37:32.511060  459147 logs.go:276] 0 containers: []
	W0717 19:37:32.511075  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:32.511083  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:32.511148  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:32.550662  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:32.550694  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:32.550700  459147 cri.go:89] found id: ""
	I0717 19:37:32.550710  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:32.550779  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.555544  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.559818  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:32.559845  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:32.599011  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:32.599044  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:32.639034  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:32.639072  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:32.680456  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:32.680497  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:32.735881  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:32.735919  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:33.295876  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:33.295927  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:33.453164  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:33.453204  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:33.469665  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:33.469696  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:33.518388  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:33.518425  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:33.580637  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:33.580683  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:33.618544  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:33.618584  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:33.656083  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:33.656127  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:33.703083  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:33.703133  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:36.261037  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:36.278701  459147 api_server.go:72] duration metric: took 4m12.907019507s to wait for apiserver process to appear ...
	I0717 19:37:36.278734  459147 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:37:36.278780  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:36.278843  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:36.320128  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:36.320158  459147 cri.go:89] found id: ""
	I0717 19:37:36.320169  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:36.320231  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.325077  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:36.325145  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:36.375930  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:36.375956  459147 cri.go:89] found id: ""
	I0717 19:37:36.375965  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:36.376022  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.381348  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:36.381428  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:36.425613  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:36.425642  459147 cri.go:89] found id: ""
	I0717 19:37:36.425653  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:36.425718  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.430743  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:36.430809  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:36.473039  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:36.473071  459147 cri.go:89] found id: ""
	I0717 19:37:36.473082  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:36.473144  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.477553  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:36.477632  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:36.519042  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:36.519066  459147 cri.go:89] found id: ""
	I0717 19:37:36.519088  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:36.519168  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.523986  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:36.524052  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:36.565547  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:36.565574  459147 cri.go:89] found id: ""
	I0717 19:37:36.565583  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:36.565636  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.570755  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:36.570832  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:36.608157  459147 cri.go:89] found id: ""
	I0717 19:37:36.608185  459147 logs.go:276] 0 containers: []
	W0717 19:37:36.608194  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:36.608201  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:36.608258  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:36.652807  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:36.652828  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:36.652832  459147 cri.go:89] found id: ""
	I0717 19:37:36.652839  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:36.652899  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.657815  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.663187  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:36.663219  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.681970  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:36.682006  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:36.797996  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:36.798041  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:36.862257  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:36.862300  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:36.900711  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:36.900752  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:37.384370  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:37.384415  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:37.676589  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:40.177720  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:36.888133  459447 pod_ready.go:81] duration metric: took 4m0.000157346s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" ...
	E0717 19:37:36.888161  459447 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 19:37:36.888179  459447 pod_ready.go:38] duration metric: took 4m7.552581235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:37:36.888210  459447 kubeadm.go:597] duration metric: took 4m17.06862666s to restartPrimaryControlPlane
	W0717 19:37:36.888317  459447 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:37:36.888368  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
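The pod_ready polling above re-checks the metrics-server pod roughly every two seconds until the 4m0s budget runs out, after which the harness gives up and resets the cluster. Outside the test harness, the same wait can be expressed as a single command; a sketch, assuming the pods carry the conventional k8s-app=metrics-server label (the label is an assumption, only the generated pod names appear in the log):

    # hypothetical equivalent of the 4m readiness wait performed by pod_ready.go
    kubectl -n kube-system wait pod -l k8s-app=metrics-server \
      --for=condition=Ready --timeout=4m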
	W0717 19:37:36.537169  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:36.537199  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:36.537216  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:39.120374  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:39.138989  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:39.139065  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:39.198086  459741 cri.go:89] found id: ""
	I0717 19:37:39.198113  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.198121  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:39.198128  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:39.198192  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:39.249660  459741 cri.go:89] found id: ""
	I0717 19:37:39.249707  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.249718  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:39.249725  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:39.249802  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:39.296042  459741 cri.go:89] found id: ""
	I0717 19:37:39.296079  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.296105  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:39.296115  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:39.296198  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:39.335401  459741 cri.go:89] found id: ""
	I0717 19:37:39.335441  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.335453  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:39.335461  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:39.335532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:39.379343  459741 cri.go:89] found id: ""
	I0717 19:37:39.379389  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.379401  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:39.379409  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:39.379478  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:39.417450  459741 cri.go:89] found id: ""
	I0717 19:37:39.417478  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.417486  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:39.417493  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:39.417556  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:39.453778  459741 cri.go:89] found id: ""
	I0717 19:37:39.453821  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.453835  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:39.453843  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:39.453937  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:39.490619  459741 cri.go:89] found id: ""
	I0717 19:37:39.490654  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.490666  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:39.490678  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:39.490695  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:39.552266  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:39.552304  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:39.567973  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:39.568018  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:39.659709  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:39.659740  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:39.659757  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:39.752017  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:39.752064  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:37.438269  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:37.438314  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:37.491298  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:37.491338  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:37.544646  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:37.544686  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:37.608191  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:37.608229  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:37.652477  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:37.652526  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:37.693416  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:37.693460  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:37.740997  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:37.741045  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.285764  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:37:40.292091  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 200:
	ok
	I0717 19:37:40.293337  459147 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:37:40.293368  459147 api_server.go:131] duration metric: took 4.014624748s to wait for apiserver health ...
	I0717 19:37:40.293379  459147 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:37:40.293412  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:40.293485  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:40.334754  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:40.334783  459147 cri.go:89] found id: ""
	I0717 19:37:40.334794  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:40.334855  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.338862  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:40.338932  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:40.379320  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:40.379350  459147 cri.go:89] found id: ""
	I0717 19:37:40.379361  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:40.379424  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.384351  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:40.384426  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:40.423393  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:40.423421  459147 cri.go:89] found id: ""
	I0717 19:37:40.423432  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:40.423496  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.429541  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:40.429622  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:40.476723  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:40.476752  459147 cri.go:89] found id: ""
	I0717 19:37:40.476762  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:40.476822  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.483324  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:40.483407  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:40.530062  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:40.530090  459147 cri.go:89] found id: ""
	I0717 19:37:40.530100  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:40.530160  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.535894  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:40.535980  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:40.574966  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:40.575000  459147 cri.go:89] found id: ""
	I0717 19:37:40.575011  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:40.575082  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.579633  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:40.579709  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:40.617093  459147 cri.go:89] found id: ""
	I0717 19:37:40.617131  459147 logs.go:276] 0 containers: []
	W0717 19:37:40.617143  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:40.617151  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:40.617217  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:40.670143  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.670170  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:40.670177  459147 cri.go:89] found id: ""
	I0717 19:37:40.670188  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:40.670265  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.675795  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.681005  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:40.681027  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.729750  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:40.729797  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:41.109749  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:41.109806  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:41.128573  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:41.128616  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:41.246119  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:41.246163  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:41.298281  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:41.298342  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:41.376160  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:41.376205  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:41.444696  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:41.444732  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:41.488191  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:41.488225  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:41.554001  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:41.554055  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:41.596172  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:41.596208  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:41.636145  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:41.636184  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:41.687058  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:41.687092  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:44.246334  459147 system_pods.go:59] 8 kube-system pods found
	I0717 19:37:44.246367  459147 system_pods.go:61] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running
	I0717 19:37:44.246373  459147 system_pods.go:61] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running
	I0717 19:37:44.246379  459147 system_pods.go:61] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running
	I0717 19:37:44.246384  459147 system_pods.go:61] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running
	I0717 19:37:44.246390  459147 system_pods.go:61] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:37:44.246394  459147 system_pods.go:61] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running
	I0717 19:37:44.246401  459147 system_pods.go:61] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:37:44.246406  459147 system_pods.go:61] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:37:44.246416  459147 system_pods.go:74] duration metric: took 3.953030235s to wait for pod list to return data ...
	I0717 19:37:44.246425  459147 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:37:44.249315  459147 default_sa.go:45] found service account: "default"
	I0717 19:37:44.249336  459147 default_sa.go:55] duration metric: took 2.904936ms for default service account to be created ...
	I0717 19:37:44.249344  459147 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:37:44.254845  459147 system_pods.go:86] 8 kube-system pods found
	I0717 19:37:44.254873  459147 system_pods.go:89] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running
	I0717 19:37:44.254879  459147 system_pods.go:89] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running
	I0717 19:37:44.254883  459147 system_pods.go:89] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running
	I0717 19:37:44.254888  459147 system_pods.go:89] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running
	I0717 19:37:44.254892  459147 system_pods.go:89] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:37:44.254895  459147 system_pods.go:89] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running
	I0717 19:37:44.254902  459147 system_pods.go:89] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:37:44.254908  459147 system_pods.go:89] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:37:44.254916  459147 system_pods.go:126] duration metric: took 5.565796ms to wait for k8s-apps to be running ...
	I0717 19:37:44.254922  459147 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:37:44.254970  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:37:44.273765  459147 system_svc.go:56] duration metric: took 18.830474ms WaitForService to wait for kubelet
	I0717 19:37:44.273805  459147 kubeadm.go:582] duration metric: took 4m20.90212576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:37:44.273838  459147 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:37:44.278782  459147 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:37:44.278833  459147 node_conditions.go:123] node cpu capacity is 2
	I0717 19:37:44.278864  459147 node_conditions.go:105] duration metric: took 5.01941ms to run NodePressure ...
	I0717 19:37:44.278879  459147 start.go:241] waiting for startup goroutines ...
	I0717 19:37:44.278889  459147 start.go:246] waiting for cluster config update ...
	I0717 19:37:44.278906  459147 start.go:255] writing updated cluster config ...
	I0717 19:37:44.279303  459147 ssh_runner.go:195] Run: rm -f paused
	I0717 19:37:44.331361  459147 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 19:37:44.334137  459147 out.go:177] * Done! kubectl is now configured to use "no-preload-713715" cluster and "default" namespace by default
	I0717 19:37:42.676991  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:45.176025  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:42.298864  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:42.312076  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:42.312160  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:42.346742  459741 cri.go:89] found id: ""
	I0717 19:37:42.346767  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.346782  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:42.346787  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:42.346839  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:42.386100  459741 cri.go:89] found id: ""
	I0717 19:37:42.386131  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.386139  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:42.386145  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:42.386196  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:42.420604  459741 cri.go:89] found id: ""
	I0717 19:37:42.420634  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.420646  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:42.420656  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:42.420725  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:42.457305  459741 cri.go:89] found id: ""
	I0717 19:37:42.457338  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.457349  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:42.457357  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:42.457422  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:42.491383  459741 cri.go:89] found id: ""
	I0717 19:37:42.491418  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.491427  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:42.491434  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:42.491489  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:42.527500  459741 cri.go:89] found id: ""
	I0717 19:37:42.527533  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.527547  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:42.527557  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:42.527642  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:42.560724  459741 cri.go:89] found id: ""
	I0717 19:37:42.560759  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.560769  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:42.560778  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:42.560854  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:42.595812  459741 cri.go:89] found id: ""
	I0717 19:37:42.595846  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.595858  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:42.595870  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:42.595886  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:42.610094  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:42.610129  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:42.683744  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:42.683763  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:42.683776  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:42.767187  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:42.767237  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:42.810319  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:42.810350  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:45.363245  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:45.378562  459741 kubeadm.go:597] duration metric: took 4m4.629259775s to restartPrimaryControlPlane
	W0717 19:37:45.378681  459741 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:37:45.378723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:37:47.675784  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:50.174617  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:50.298107  459741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.919332692s)
	I0717 19:37:50.298189  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:37:50.314299  459741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:37:50.325112  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:37:50.335943  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:37:50.335970  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:37:50.336018  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:37:50.345604  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:37:50.345669  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:37:50.355339  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:37:50.365401  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:37:50.365468  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:37:50.378870  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:37:50.388710  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:37:50.388779  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:37:50.398847  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:37:50.408579  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:37:50.408648  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:37:50.419223  459741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:37:50.655878  459741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:37:52.175610  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:54.675346  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:57.175606  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:59.175665  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:01.675667  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:04.174856  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:06.175048  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:08.558767  459447 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.670364582s)
	I0717 19:38:08.558869  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:08.574972  459447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:38:08.585748  459447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:38:08.595641  459447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:38:08.595677  459447 kubeadm.go:157] found existing configuration files:
	
	I0717 19:38:08.595741  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 19:38:08.605738  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:38:08.605792  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:38:08.615415  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 19:38:08.625406  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:38:08.625465  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:38:08.635462  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 19:38:08.644862  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:38:08.644938  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:38:08.654840  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 19:38:08.664308  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:38:08.664371  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:38:08.675152  459447 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:38:08.726060  459447 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 19:38:08.726181  459447 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:38:08.868399  459447 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:38:08.868535  459447 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:38:08.868680  459447 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:38:09.092126  459447 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:38:09.094144  459447 out.go:204]   - Generating certificates and keys ...
	I0717 19:38:09.094257  459447 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:38:09.094344  459447 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:38:09.094447  459447 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:38:09.094529  459447 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:38:09.094728  459447 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:38:09.094841  459447 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:38:09.094958  459447 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:38:09.095051  459447 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:38:09.095145  459447 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:38:09.095234  459447 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:38:09.095302  459447 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:38:09.095407  459447 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:38:09.220760  459447 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:38:09.395779  459447 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:38:09.485283  459447 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:38:09.582142  459447 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:38:09.644739  459447 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:38:09.645546  459447 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:38:09.648168  459447 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:38:08.175516  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:10.676234  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:09.651091  459447 out.go:204]   - Booting up control plane ...
	I0717 19:38:09.651237  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:38:09.651380  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:38:09.651472  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:38:09.672137  459447 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:38:09.675016  459447 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:38:09.675265  459447 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:38:09.835705  459447 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:38:09.835804  459447 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:38:10.837657  459447 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002210874s
	I0717 19:38:10.837780  459447 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 19:38:15.841849  459447 kubeadm.go:310] [api-check] The API server is healthy after 5.002346886s
	I0717 19:38:15.853189  459447 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:38:15.871261  459447 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:38:15.901421  459447 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:38:15.901663  459447 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-378944 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:38:15.914138  459447 kubeadm.go:310] [bootstrap-token] Using token: f20mgr.mp8yeahngp4xg46o
	I0717 19:38:12.678188  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:15.176507  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:15.916156  459447 out.go:204]   - Configuring RBAC rules ...
	I0717 19:38:15.916304  459447 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:38:15.926114  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:38:15.936748  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:38:15.940344  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:38:15.943530  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:38:15.947036  459447 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:38:16.249457  459447 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:38:16.706293  459447 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 19:38:17.247816  459447 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 19:38:17.249321  459447 kubeadm.go:310] 
	I0717 19:38:17.249431  459447 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 19:38:17.249453  459447 kubeadm.go:310] 
	I0717 19:38:17.249552  459447 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 19:38:17.249563  459447 kubeadm.go:310] 
	I0717 19:38:17.249594  459447 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 19:38:17.249677  459447 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:38:17.249768  459447 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:38:17.249791  459447 kubeadm.go:310] 
	I0717 19:38:17.249868  459447 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 19:38:17.249878  459447 kubeadm.go:310] 
	I0717 19:38:17.249949  459447 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:38:17.249968  459447 kubeadm.go:310] 
	I0717 19:38:17.250016  459447 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 19:38:17.250083  459447 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:38:17.250143  459447 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:38:17.250149  459447 kubeadm.go:310] 
	I0717 19:38:17.250269  459447 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:38:17.250371  459447 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 19:38:17.250381  459447 kubeadm.go:310] 
	I0717 19:38:17.250484  459447 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token f20mgr.mp8yeahngp4xg46o \
	I0717 19:38:17.250605  459447 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 19:38:17.250663  459447 kubeadm.go:310] 	--control-plane 
	I0717 19:38:17.250677  459447 kubeadm.go:310] 
	I0717 19:38:17.250771  459447 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:38:17.250784  459447 kubeadm.go:310] 
	I0717 19:38:17.250870  459447 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token f20mgr.mp8yeahngp4xg46o \
	I0717 19:38:17.251029  459447 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 19:38:17.252262  459447 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:38:17.252302  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:38:17.252318  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:38:17.254910  459447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:38:17.669679  459061 pod_ready.go:81] duration metric: took 4m0.000889569s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" ...
	E0717 19:38:17.669706  459061 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 19:38:17.669726  459061 pod_ready.go:38] duration metric: took 4m8.910120635s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:17.669768  459061 kubeadm.go:597] duration metric: took 4m18.632716414s to restartPrimaryControlPlane
	W0717 19:38:17.669838  459061 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:38:17.669870  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:38:17.256192  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:38:17.268586  459447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:38:17.292455  459447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:38:17.292536  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:17.292623  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-378944 minikube.k8s.io/updated_at=2024_07_17T19_38_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=default-k8s-diff-port-378944 minikube.k8s.io/primary=true
	I0717 19:38:17.325184  459447 ops.go:34] apiserver oom_adj: -16
	I0717 19:38:17.469427  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:17.969845  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:18.470139  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:18.969524  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:19.469856  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:19.970486  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:20.470263  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:20.970157  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:21.470331  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:21.969885  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:22.469572  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:22.969898  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:23.470149  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:23.970327  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:24.470275  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:24.970386  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:25.469631  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:25.969749  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:26.469512  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:26.970082  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:27.469534  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:27.970318  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:28.470232  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:28.970033  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:29.469586  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:29.969588  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:30.469599  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:30.970505  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:31.119385  459447 kubeadm.go:1113] duration metric: took 13.826924083s to wait for elevateKubeSystemPrivileges
	I0717 19:38:31.119428  459447 kubeadm.go:394] duration metric: took 5m11.355625204s to StartCluster
	I0717 19:38:31.119449  459447 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:38:31.119548  459447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:38:31.121296  459447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:38:31.121610  459447 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:38:31.121724  459447 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:38:31.121802  459447 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-378944"
	I0717 19:38:31.121827  459447 config.go:182] Loaded profile config "default-k8s-diff-port-378944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:38:31.121846  459447 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-378944"
	I0717 19:38:31.121849  459447 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-378944"
	I0717 19:38:31.121873  459447 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-378944"
	W0717 19:38:31.121883  459447 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:38:31.121899  459447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-378944"
	I0717 19:38:31.121906  459447 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-378944"
	W0717 19:38:31.121915  459447 addons.go:243] addon metrics-server should already be in state true
	I0717 19:38:31.121927  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.121969  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.122322  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122339  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122366  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.122379  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122388  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.122411  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.123339  459447 out.go:177] * Verifying Kubernetes components...
	I0717 19:38:31.129194  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:38:31.139023  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0717 19:38:31.139292  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0717 19:38:31.139632  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.139775  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.140272  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.140292  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.140684  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.140710  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.140731  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.141234  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.141257  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.141425  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0717 19:38:31.141431  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.141919  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.142149  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.142181  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.142410  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.142435  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.142824  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.143055  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.147020  459447 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-378944"
	W0717 19:38:31.147043  459447 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:38:31.147076  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.147428  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.147462  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.158908  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
	I0717 19:38:31.159534  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.160413  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.160438  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.161313  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.161588  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.161794  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0717 19:38:31.162315  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.162935  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.162963  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.163360  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.163618  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.164401  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.165089  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40011
	I0717 19:38:31.165402  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.165493  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.166082  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.166108  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.166133  459447 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:38:31.166520  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.166951  459447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:38:31.166995  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.167294  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.167678  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:38:31.167700  459447 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:38:31.167725  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.168668  459447 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:38:31.168686  459447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:38:31.168704  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.171358  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.171986  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.172013  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172236  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172379  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.172558  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.172646  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.172749  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.172778  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172902  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.173186  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.173396  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.173570  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.173711  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.184779  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0717 19:38:31.185400  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.186325  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.186350  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.186736  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.186981  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.188627  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.188841  459447 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:38:31.188860  459447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:38:31.188881  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.191674  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.192104  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.192129  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.192375  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.192868  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.193084  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.193250  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.351524  459447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:38:31.365996  459447 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-378944" to be "Ready" ...
	I0717 19:38:31.376135  459447 node_ready.go:49] node "default-k8s-diff-port-378944" has status "Ready":"True"
	I0717 19:38:31.376168  459447 node_ready.go:38] duration metric: took 10.135533ms for node "default-k8s-diff-port-378944" to be "Ready" ...
	I0717 19:38:31.376182  459447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:31.385746  459447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:31.471924  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:38:31.488412  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:38:31.488440  459447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:38:31.489634  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:38:31.578028  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:38:31.578059  459447 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:38:31.653567  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:38:31.653598  459447 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:38:31.692100  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:38:32.700716  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.228741753s)
	I0717 19:38:32.700795  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.700796  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.211127639s)
	I0717 19:38:32.700851  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.700869  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.700808  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703149  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703149  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703155  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703183  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703193  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.703202  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703163  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703235  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703254  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.703267  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703505  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703517  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703529  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703554  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703520  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.778305  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.778331  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.778693  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.778779  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.778733  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.942079  459447 pod_ready.go:92] pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:32.942114  459447 pod_ready.go:81] duration metric: took 1.556334407s for pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:32.942128  459447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.018197  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326052616s)
	I0717 19:38:33.018262  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:33.018277  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:33.018625  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:33.018649  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:33.018659  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:33.018669  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:33.018696  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:33.018956  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:33.018975  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:33.018996  459447 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-378944"
	I0717 19:38:33.021803  459447 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:38:33.023032  459447 addons.go:510] duration metric: took 1.901306809s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:38:33.949013  459447 pod_ready.go:92] pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.949038  459447 pod_ready.go:81] duration metric: took 1.006901797s for pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.949050  459447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.953373  459447 pod_ready.go:92] pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.953393  459447 pod_ready.go:81] duration metric: took 4.33631ms for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.953404  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.957845  459447 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.957869  459447 pod_ready.go:81] duration metric: took 4.456882ms for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.957881  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.962465  459447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.962488  459447 pod_ready.go:81] duration metric: took 4.598385ms for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.962500  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vhjq4" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.170244  459447 pod_ready.go:92] pod "kube-proxy-vhjq4" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:34.170274  459447 pod_ready.go:81] duration metric: took 207.766629ms for pod "kube-proxy-vhjq4" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.170284  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.570267  459447 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:34.570299  459447 pod_ready.go:81] duration metric: took 400.008056ms for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.570324  459447 pod_ready.go:38] duration metric: took 3.194102991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:34.570356  459447 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:38:34.570415  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:38:34.590893  459447 api_server.go:72] duration metric: took 3.469242847s to wait for apiserver process to appear ...
	I0717 19:38:34.590918  459447 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:38:34.590939  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:38:34.596086  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 200:
	ok
	I0717 19:38:34.597189  459447 api_server.go:141] control plane version: v1.30.2
	I0717 19:38:34.597213  459447 api_server.go:131] duration metric: took 6.288225ms to wait for apiserver health ...
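The healthz wait logged just above is an HTTPS GET against the apiserver's /healthz endpoint until it returns 200. The Go sketch below is only an illustration of that kind of probe, not minikube's api_server.go code: it skips certificate verification for brevity, whereas a real client would trust the cluster CA, and it reuses the URL shown in the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Illustrative probe only: InsecureSkipVerify avoids needing the cluster
	// CA here; production code should verify the apiserver certificate.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.238:8444/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect "200 ok"
}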
	I0717 19:38:34.597221  459447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:38:34.774523  459447 system_pods.go:59] 9 kube-system pods found
	I0717 19:38:34.774563  459447 system_pods.go:61] "coredns-7db6d8ff4d-jnwgp" [f86efa81-cbe0-44a7-888f-639af3dc58ad] Running
	I0717 19:38:34.774571  459447 system_pods.go:61] "coredns-7db6d8ff4d-xbtct" [c24ce9ab-babb-4589-8046-e8e2d4ca68af] Running
	I0717 19:38:34.774577  459447 system_pods.go:61] "etcd-default-k8s-diff-port-378944" [b15d7ac0-b014-4fed-8e03-3b2eb8b23911] Running
	I0717 19:38:34.774582  459447 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-378944" [78cd796b-d751-44dd-91e7-85b48c77d87c] Running
	I0717 19:38:34.774590  459447 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-378944" [4981a20d-ce96-4c27-9b14-17e4a8a18a7c] Running
	I0717 19:38:34.774595  459447 system_pods.go:61] "kube-proxy-vhjq4" [092af79d-ebc0-4e16-97ef-725195e95344] Running
	I0717 19:38:34.774598  459447 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-378944" [60a0717a-ad29-4360-a514-afc1081f115c] Running
	I0717 19:38:34.774607  459447 system_pods.go:61] "metrics-server-569cc877fc-hvknj" [d214e760-d49e-4554-85c2-77e5da1b150f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:38:34.774613  459447 system_pods.go:61] "storage-provisioner" [153a102e-f07b-46b4-a9d0-9e754237ca6e] Running
	I0717 19:38:34.774624  459447 system_pods.go:74] duration metric: took 177.395337ms to wait for pod list to return data ...
	I0717 19:38:34.774636  459447 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:38:34.970004  459447 default_sa.go:45] found service account: "default"
	I0717 19:38:34.970040  459447 default_sa.go:55] duration metric: took 195.394993ms for default service account to be created ...
	I0717 19:38:34.970054  459447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:38:35.173288  459447 system_pods.go:86] 9 kube-system pods found
	I0717 19:38:35.173327  459447 system_pods.go:89] "coredns-7db6d8ff4d-jnwgp" [f86efa81-cbe0-44a7-888f-639af3dc58ad] Running
	I0717 19:38:35.173336  459447 system_pods.go:89] "coredns-7db6d8ff4d-xbtct" [c24ce9ab-babb-4589-8046-e8e2d4ca68af] Running
	I0717 19:38:35.173343  459447 system_pods.go:89] "etcd-default-k8s-diff-port-378944" [b15d7ac0-b014-4fed-8e03-3b2eb8b23911] Running
	I0717 19:38:35.173352  459447 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-378944" [78cd796b-d751-44dd-91e7-85b48c77d87c] Running
	I0717 19:38:35.173359  459447 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-378944" [4981a20d-ce96-4c27-9b14-17e4a8a18a7c] Running
	I0717 19:38:35.173365  459447 system_pods.go:89] "kube-proxy-vhjq4" [092af79d-ebc0-4e16-97ef-725195e95344] Running
	I0717 19:38:35.173370  459447 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-378944" [60a0717a-ad29-4360-a514-afc1081f115c] Running
	I0717 19:38:35.173377  459447 system_pods.go:89] "metrics-server-569cc877fc-hvknj" [d214e760-d49e-4554-85c2-77e5da1b150f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:38:35.173384  459447 system_pods.go:89] "storage-provisioner" [153a102e-f07b-46b4-a9d0-9e754237ca6e] Running
	I0717 19:38:35.173397  459447 system_pods.go:126] duration metric: took 203.335308ms to wait for k8s-apps to be running ...
	I0717 19:38:35.173406  459447 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:38:35.173471  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:35.188943  459447 system_svc.go:56] duration metric: took 15.522808ms WaitForService to wait for kubelet
	I0717 19:38:35.188980  459447 kubeadm.go:582] duration metric: took 4.067341756s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:38:35.189006  459447 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:38:35.369694  459447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:38:35.369723  459447 node_conditions.go:123] node cpu capacity is 2
	I0717 19:38:35.369748  459447 node_conditions.go:105] duration metric: took 180.736346ms to run NodePressure ...
	I0717 19:38:35.369764  459447 start.go:241] waiting for startup goroutines ...
	I0717 19:38:35.369773  459447 start.go:246] waiting for cluster config update ...
	I0717 19:38:35.369787  459447 start.go:255] writing updated cluster config ...
	I0717 19:38:35.370064  459447 ssh_runner.go:195] Run: rm -f paused
	I0717 19:38:35.422285  459447 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 19:38:35.424315  459447 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-378944" cluster and "default" namespace by default
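The node_ready/pod_ready steps logged during this start sequence follow the usual client-go polling pattern: fetch the object, inspect its conditions, retry until a timeout. The sketch below is a hedged approximation of that pattern, not minikube's actual helpers; the kubeconfig path and poll interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the API server until the named node reports the
// Ready condition, mirroring the "waiting up to 6m0s for node ... to be
// Ready" step in the log above (illustrative only).
func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Kubeconfig path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(context.Background(), cs, "default-k8s-diff-port-378944", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}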
	I0717 19:38:49.633874  459061 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.96396735s)
	I0717 19:38:49.633958  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:49.653668  459061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:38:49.665421  459061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:38:49.677405  459061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:38:49.677433  459061 kubeadm.go:157] found existing configuration files:
	
	I0717 19:38:49.677485  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:38:49.688418  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:38:49.688515  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:38:49.699121  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:38:49.709505  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:38:49.709622  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:38:49.720533  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:38:49.731191  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:38:49.731259  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:38:49.741071  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:38:49.750483  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:38:49.750540  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:38:49.759991  459061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:38:49.814169  459061 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 19:38:49.814235  459061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:38:49.977655  459061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:38:49.977811  459061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:38:49.977922  459061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:38:50.204096  459061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:38:50.206849  459061 out.go:204]   - Generating certificates and keys ...
	I0717 19:38:50.206956  459061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:38:50.207032  459061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:38:50.207102  459061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:38:50.207227  459061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:38:50.207341  459061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:38:50.207388  459061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:38:50.207448  459061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:38:50.207511  459061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:38:50.207618  459061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:38:50.207732  459061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:38:50.207787  459061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:38:50.207868  459061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:38:50.298049  459061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:38:50.456369  459061 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:38:50.649923  459061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:38:50.771710  459061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:38:50.939506  459061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:38:50.939999  459061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:38:50.942645  459061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:38:50.944456  459061 out.go:204]   - Booting up control plane ...
	I0717 19:38:50.944563  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:38:50.944648  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:38:50.944906  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:38:50.963779  459061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:38:50.964946  459061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:38:50.964999  459061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:38:51.112106  459061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:38:51.112222  459061 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:38:51.613966  459061 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.041018ms
	I0717 19:38:51.614079  459061 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 19:38:56.617120  459061 kubeadm.go:310] [api-check] The API server is healthy after 5.003106336s
	I0717 19:38:56.635312  459061 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:38:56.653249  459061 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:38:56.688277  459061 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:38:56.688570  459061 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-637675 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:38:56.703781  459061 kubeadm.go:310] [bootstrap-token] Using token: 5c1d8d.hedm6ka56xpdzroz
	I0717 19:38:56.705437  459061 out.go:204]   - Configuring RBAC rules ...
	I0717 19:38:56.705575  459061 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:38:56.712968  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:38:56.723899  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:38:56.731634  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:38:56.737169  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:38:56.745083  459061 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:38:57.024680  459061 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:38:57.477396  459061 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 19:38:58.025476  459061 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 19:38:58.026512  459061 kubeadm.go:310] 
	I0717 19:38:58.026631  459061 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 19:38:58.026655  459061 kubeadm.go:310] 
	I0717 19:38:58.026772  459061 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 19:38:58.026790  459061 kubeadm.go:310] 
	I0717 19:38:58.026828  459061 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 19:38:58.026905  459061 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:38:58.026971  459061 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:38:58.026979  459061 kubeadm.go:310] 
	I0717 19:38:58.027070  459061 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 19:38:58.027094  459061 kubeadm.go:310] 
	I0717 19:38:58.027163  459061 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:38:58.027171  459061 kubeadm.go:310] 
	I0717 19:38:58.027242  459061 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 19:38:58.027341  459061 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:38:58.027431  459061 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:38:58.027442  459061 kubeadm.go:310] 
	I0717 19:38:58.027547  459061 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:38:58.027663  459061 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 19:38:58.027673  459061 kubeadm.go:310] 
	I0717 19:38:58.027788  459061 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5c1d8d.hedm6ka56xpdzroz \
	I0717 19:38:58.027949  459061 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 19:38:58.027998  459061 kubeadm.go:310] 	--control-plane 
	I0717 19:38:58.028012  459061 kubeadm.go:310] 
	I0717 19:38:58.028123  459061 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:38:58.028133  459061 kubeadm.go:310] 
	I0717 19:38:58.028235  459061 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5c1d8d.hedm6ka56xpdzroz \
	I0717 19:38:58.028355  459061 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 19:38:58.028891  459061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:38:58.029012  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:38:58.029029  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:38:58.031915  459061 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:38:58.033543  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:38:58.044441  459061 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
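The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in the log. As a rough illustration of what a bridge CNI configuration of this kind typically looks like, the hedged Go sketch below writes a minimal conflist; the concrete values (name, subnet, plugin list) are assumptions, not the file minikube actually generated.

package main

import (
	"fmt"
	"os"
)

// A minimal bridge CNI conflist in the spirit of the file copied above.
// Field values are illustrative assumptions only.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Requires root, as in the "sudo mkdir -p /etc/cni/net.d" step above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote bridge CNI config")
}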
	I0717 19:38:58.062984  459061 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:38:58.063092  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:58.063115  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-637675 minikube.k8s.io/updated_at=2024_07_17T19_38_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=embed-certs-637675 minikube.k8s.io/primary=true
	I0717 19:38:58.088566  459061 ops.go:34] apiserver oom_adj: -16
	I0717 19:38:58.243142  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:58.743578  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:59.244162  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:59.743393  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:00.244096  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:00.743309  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:01.244049  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:01.743222  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:02.243771  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:02.743459  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:03.243303  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:03.743299  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:04.243263  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:04.743572  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:05.243876  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:05.743567  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:06.244040  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:06.743302  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:07.244174  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:07.744243  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:08.244108  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:08.744208  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:09.243712  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:09.743417  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:10.243321  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:10.743234  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:11.244006  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:11.744244  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:12.243673  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:12.373286  459061 kubeadm.go:1113] duration metric: took 14.310267908s to wait for elevateKubeSystemPrivileges
	I0717 19:39:12.373331  459061 kubeadm.go:394] duration metric: took 5m13.390297719s to StartCluster
	I0717 19:39:12.373357  459061 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:39:12.373461  459061 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:39:12.375404  459061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:39:12.375739  459061 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:39:12.375786  459061 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:39:12.375875  459061 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-637675"
	I0717 19:39:12.375919  459061 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-637675"
	W0717 19:39:12.375933  459061 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:39:12.375967  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.375981  459061 config.go:182] Loaded profile config "embed-certs-637675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:39:12.376031  459061 addons.go:69] Setting default-storageclass=true in profile "embed-certs-637675"
	I0717 19:39:12.376062  459061 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-637675"
	I0717 19:39:12.376333  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.376359  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.376426  459061 addons.go:69] Setting metrics-server=true in profile "embed-certs-637675"
	I0717 19:39:12.376494  459061 addons.go:234] Setting addon metrics-server=true in "embed-certs-637675"
	W0717 19:39:12.376526  459061 addons.go:243] addon metrics-server should already be in state true
	I0717 19:39:12.376596  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.376427  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.376672  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.376981  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.377140  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.378935  459061 out.go:177] * Verifying Kubernetes components...
	I0717 19:39:12.380094  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:39:12.396180  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37281
	I0717 19:39:12.396769  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.397333  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.397359  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.397449  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44781
	I0717 19:39:12.397580  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40945
	I0717 19:39:12.397773  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.397893  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.398045  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.398343  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.398355  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.398387  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.398430  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.398488  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.398499  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.398660  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.398798  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.399295  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.399322  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.399545  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.403398  459061 addons.go:234] Setting addon default-storageclass=true in "embed-certs-637675"
	W0717 19:39:12.403420  459061 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:39:12.403451  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.403872  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.403898  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.415595  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0717 19:39:12.416232  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.417013  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.417033  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.417587  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.418029  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.419082  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0717 19:39:12.420074  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.420699  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.420856  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.420875  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.421414  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.421614  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.423149  459061 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:39:12.423248  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
	I0717 19:39:12.423428  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.423575  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.424023  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.424076  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.424418  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.424571  459061 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:39:12.424588  459061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:39:12.424608  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.424944  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.424980  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.425348  459061 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:39:12.426757  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:39:12.426781  459061 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:39:12.426853  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.427990  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.428571  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.428594  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.429076  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.429456  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.429803  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.430161  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.430952  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.432978  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.433047  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.433185  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.433366  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.433623  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.433978  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.441066  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0717 19:39:12.441557  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.442011  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.442029  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.442447  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.442677  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.444789  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.444999  459061 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:39:12.445015  459061 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:39:12.445036  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.447829  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.448361  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.448390  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.448577  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.448770  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.448936  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.449070  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.728350  459061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:39:12.774599  459061 node_ready.go:35] waiting up to 6m0s for node "embed-certs-637675" to be "Ready" ...
	I0717 19:39:12.787047  459061 node_ready.go:49] node "embed-certs-637675" has status "Ready":"True"
	I0717 19:39:12.787080  459061 node_ready.go:38] duration metric: took 12.442277ms for node "embed-certs-637675" to be "Ready" ...
	I0717 19:39:12.787092  459061 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:39:12.794421  459061 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:12.884786  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:39:12.916243  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:39:12.956508  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:39:12.956539  459061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:39:13.012727  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:39:13.012757  459061 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:39:13.090259  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:39:13.090288  459061 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:39:13.189147  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:39:13.743500  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.743529  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.743886  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.743943  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.743967  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.743984  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.743993  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.744243  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.744292  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.744318  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.745277  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.745344  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.745605  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.745624  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.745632  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.745642  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.745646  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.745835  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.745861  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.745876  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.760884  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.760909  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.761330  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.761352  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.761392  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.809721  459061 pod_ready.go:92] pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:13.809743  459061 pod_ready.go:81] duration metric: took 1.015289517s for pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:13.809753  459061 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.027460  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:14.027489  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:14.027856  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:14.027878  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:14.027889  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:14.027898  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:14.028130  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:14.028146  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:14.028177  459061 addons.go:475] Verifying addon metrics-server=true in "embed-certs-637675"
	I0717 19:39:14.030113  459061 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:39:14.031442  459061 addons.go:510] duration metric: took 1.65566168s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:39:14.816503  459061 pod_ready.go:92] pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.816527  459061 pod_ready.go:81] duration metric: took 1.006767634s for pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.816536  459061 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.820749  459061 pod_ready.go:92] pod "etcd-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.820768  459061 pod_ready.go:81] duration metric: took 4.225695ms for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.820775  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.824793  459061 pod_ready.go:92] pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.824812  459061 pod_ready.go:81] duration metric: took 4.02987ms for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.824823  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.828718  459061 pod_ready.go:92] pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.828738  459061 pod_ready.go:81] duration metric: took 3.907636ms for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.828748  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dns5j" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.178249  459061 pod_ready.go:92] pod "kube-proxy-dns5j" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:15.178276  459061 pod_ready.go:81] duration metric: took 349.519823ms for pod "kube-proxy-dns5j" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.178289  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.578418  459061 pod_ready.go:92] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:15.578445  459061 pod_ready.go:81] duration metric: took 400.149092ms for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.578454  459061 pod_ready.go:38] duration metric: took 2.791350468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:39:15.578471  459061 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:39:15.578526  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:39:15.597456  459061 api_server.go:72] duration metric: took 3.221674147s to wait for apiserver process to appear ...
	I0717 19:39:15.597483  459061 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:39:15.597503  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:39:15.602054  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0717 19:39:15.603214  459061 api_server.go:141] control plane version: v1.30.2
	I0717 19:39:15.603238  459061 api_server.go:131] duration metric: took 5.7478ms to wait for apiserver health ...
	I0717 19:39:15.603248  459061 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:39:15.783262  459061 system_pods.go:59] 9 kube-system pods found
	I0717 19:39:15.783295  459061 system_pods.go:61] "coredns-7db6d8ff4d-45xn7" [9c936942-55bb-44c9-b446-365ec316c390] Running
	I0717 19:39:15.783300  459061 system_pods.go:61] "coredns-7db6d8ff4d-nw8g8" [0313a484-73be-49e2-a483-b15f47abc24a] Running
	I0717 19:39:15.783303  459061 system_pods.go:61] "etcd-embed-certs-637675" [d83ac63c-5eb5-40f0-bf58-37c048642b72] Running
	I0717 19:39:15.783307  459061 system_pods.go:61] "kube-apiserver-embed-certs-637675" [0b60ef89-e78c-4e24-b391-a5d4930d0f5f] Running
	I0717 19:39:15.783310  459061 system_pods.go:61] "kube-controller-manager-embed-certs-637675" [b2da7425-19f4-4435-8a30-17744a3289b0] Running
	I0717 19:39:15.783312  459061 system_pods.go:61] "kube-proxy-dns5j" [4d248751-6ee4-460d-b608-be6586613e3d] Running
	I0717 19:39:15.783315  459061 system_pods.go:61] "kube-scheduler-embed-certs-637675" [43f463da-858a-4261-b7a1-01e504e157f6] Running
	I0717 19:39:15.783321  459061 system_pods.go:61] "metrics-server-569cc877fc-jf42d" [c92dbb96-5721-4ff9-a428-9215223d2b83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:39:15.783325  459061 system_pods.go:61] "storage-provisioner" [11a18e44-b523-46b2-a890-dd693460e032] Running
	I0717 19:39:15.783331  459061 system_pods.go:74] duration metric: took 180.078172ms to wait for pod list to return data ...
	I0717 19:39:15.783339  459061 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:39:15.978711  459061 default_sa.go:45] found service account: "default"
	I0717 19:39:15.978747  459061 default_sa.go:55] duration metric: took 195.400502ms for default service account to be created ...
	I0717 19:39:15.978762  459061 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:39:16.181968  459061 system_pods.go:86] 9 kube-system pods found
	I0717 19:39:16.181997  459061 system_pods.go:89] "coredns-7db6d8ff4d-45xn7" [9c936942-55bb-44c9-b446-365ec316c390] Running
	I0717 19:39:16.182003  459061 system_pods.go:89] "coredns-7db6d8ff4d-nw8g8" [0313a484-73be-49e2-a483-b15f47abc24a] Running
	I0717 19:39:16.182007  459061 system_pods.go:89] "etcd-embed-certs-637675" [d83ac63c-5eb5-40f0-bf58-37c048642b72] Running
	I0717 19:39:16.182011  459061 system_pods.go:89] "kube-apiserver-embed-certs-637675" [0b60ef89-e78c-4e24-b391-a5d4930d0f5f] Running
	I0717 19:39:16.182016  459061 system_pods.go:89] "kube-controller-manager-embed-certs-637675" [b2da7425-19f4-4435-8a30-17744a3289b0] Running
	I0717 19:39:16.182021  459061 system_pods.go:89] "kube-proxy-dns5j" [4d248751-6ee4-460d-b608-be6586613e3d] Running
	I0717 19:39:16.182025  459061 system_pods.go:89] "kube-scheduler-embed-certs-637675" [43f463da-858a-4261-b7a1-01e504e157f6] Running
	I0717 19:39:16.182033  459061 system_pods.go:89] "metrics-server-569cc877fc-jf42d" [c92dbb96-5721-4ff9-a428-9215223d2b83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:39:16.182042  459061 system_pods.go:89] "storage-provisioner" [11a18e44-b523-46b2-a890-dd693460e032] Running
	I0717 19:39:16.182049  459061 system_pods.go:126] duration metric: took 203.281636ms to wait for k8s-apps to be running ...
	I0717 19:39:16.182057  459061 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:39:16.182101  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:39:16.198464  459061 system_svc.go:56] duration metric: took 16.391405ms WaitForService to wait for kubelet
	I0717 19:39:16.198504  459061 kubeadm.go:582] duration metric: took 3.822728067s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:39:16.198531  459061 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:39:16.378407  459061 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:39:16.378440  459061 node_conditions.go:123] node cpu capacity is 2
	I0717 19:39:16.378451  459061 node_conditions.go:105] duration metric: took 179.91335ms to run NodePressure ...
	I0717 19:39:16.378465  459061 start.go:241] waiting for startup goroutines ...
	I0717 19:39:16.378476  459061 start.go:246] waiting for cluster config update ...
	I0717 19:39:16.378489  459061 start.go:255] writing updated cluster config ...
	I0717 19:39:16.378845  459061 ssh_runner.go:195] Run: rm -f paused
	I0717 19:39:16.431808  459061 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 19:39:16.433648  459061 out.go:177] * Done! kubectl is now configured to use "embed-certs-637675" cluster and "default" namespace by default
	I0717 19:39:46.819105  459741 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:39:46.819209  459741 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 19:39:46.820837  459741 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:39:46.820940  459741 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:39:46.821010  459741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:39:46.821148  459741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:39:46.821282  459741 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:39:46.821377  459741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:39:46.823092  459741 out.go:204]   - Generating certificates and keys ...
	I0717 19:39:46.823190  459741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:39:46.823280  459741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:39:46.823409  459741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:39:46.823509  459741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:39:46.823629  459741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:39:46.823715  459741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:39:46.823802  459741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:39:46.823885  459741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:39:46.823975  459741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:39:46.824067  459741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:39:46.824109  459741 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:39:46.824183  459741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:39:46.824248  459741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:39:46.824309  459741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:39:46.824409  459741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:39:46.824506  459741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:39:46.824642  459741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:39:46.824729  459741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:39:46.824775  459741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:39:46.824869  459741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:39:46.826222  459741 out.go:204]   - Booting up control plane ...
	I0717 19:39:46.826334  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:39:46.826483  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:39:46.826566  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:39:46.826677  459741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:39:46.826855  459741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:39:46.826954  459741 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:39:46.827061  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827286  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827365  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827537  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827618  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827814  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827916  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.828105  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.828210  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.828440  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.828449  459741 kubeadm.go:310] 
	I0717 19:39:46.828482  459741 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:39:46.828544  459741 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:39:46.828555  459741 kubeadm.go:310] 
	I0717 19:39:46.828601  459741 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:39:46.828648  459741 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:39:46.828787  459741 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:39:46.828795  459741 kubeadm.go:310] 
	I0717 19:39:46.828928  459741 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:39:46.828975  459741 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:39:46.829023  459741 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:39:46.829033  459741 kubeadm.go:310] 
	I0717 19:39:46.829156  459741 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:39:46.829280  459741 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:39:46.829288  459741 kubeadm.go:310] 
	I0717 19:39:46.829430  459741 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:39:46.829538  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:39:46.829640  459741 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:39:46.829753  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:39:46.829812  459741 kubeadm.go:310] 
	W0717 19:39:46.829883  459741 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 19:39:46.829939  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:39:47.290949  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:39:47.307166  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:39:47.318260  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:39:47.318283  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:39:47.318336  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:39:47.328087  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:39:47.328150  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:39:47.339029  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:39:47.348854  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:39:47.348913  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:39:47.358498  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:39:47.368592  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:39:47.368651  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:39:47.379802  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:39:47.391069  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:39:47.391139  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:39:47.402410  459741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:39:47.620822  459741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:41:43.630999  459741 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:41:43.631161  459741 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 19:41:43.631238  459741 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:41:43.631322  459741 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:41:43.631452  459741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:41:43.631595  459741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:41:43.631767  459741 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:41:43.631852  459741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:41:43.633956  459741 out.go:204]   - Generating certificates and keys ...
	I0717 19:41:43.634058  459741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:41:43.634160  459741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:41:43.634292  459741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:41:43.634382  459741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:41:43.634457  459741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:41:43.634560  459741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:41:43.634646  459741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:41:43.634743  459741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:41:43.634848  459741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:41:43.634977  459741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:41:43.635038  459741 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:41:43.635088  459741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:41:43.635129  459741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:41:43.635173  459741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:41:43.635240  459741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:41:43.635326  459741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:41:43.635477  459741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:41:43.635594  459741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:41:43.635675  459741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:41:43.635758  459741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:41:43.637529  459741 out.go:204]   - Booting up control plane ...
	I0717 19:41:43.637719  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:41:43.637857  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:41:43.637948  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:41:43.638086  459741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:41:43.638278  459741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:41:43.638336  459741 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:41:43.638427  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.638656  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.638732  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.638966  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639046  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639310  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639407  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639665  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639769  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639950  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639969  459741 kubeadm.go:310] 
	I0717 19:41:43.640006  459741 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:41:43.640047  459741 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:41:43.640056  459741 kubeadm.go:310] 
	I0717 19:41:43.640101  459741 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:41:43.640148  459741 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:41:43.640247  459741 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:41:43.640255  459741 kubeadm.go:310] 
	I0717 19:41:43.640365  459741 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:41:43.640398  459741 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:41:43.640426  459741 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:41:43.640434  459741 kubeadm.go:310] 
	I0717 19:41:43.640580  459741 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:41:43.640664  459741 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:41:43.640676  459741 kubeadm.go:310] 
	I0717 19:41:43.640772  459741 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:41:43.640849  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:41:43.640912  459741 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:41:43.640975  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:41:43.640997  459741 kubeadm.go:310] 
	I0717 19:41:43.641050  459741 kubeadm.go:394] duration metric: took 8m2.947491611s to StartCluster
	I0717 19:41:43.641102  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:41:43.641159  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:41:43.691693  459741 cri.go:89] found id: ""
	I0717 19:41:43.691734  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.691746  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:41:43.691755  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:41:43.691822  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:41:43.730266  459741 cri.go:89] found id: ""
	I0717 19:41:43.730301  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.730311  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:41:43.730319  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:41:43.730401  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:41:43.766878  459741 cri.go:89] found id: ""
	I0717 19:41:43.766907  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.766916  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:41:43.766922  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:41:43.767012  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:41:43.810002  459741 cri.go:89] found id: ""
	I0717 19:41:43.810040  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.810051  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:41:43.810059  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:41:43.810133  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:41:43.846561  459741 cri.go:89] found id: ""
	I0717 19:41:43.846621  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.846637  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:41:43.846645  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:41:43.846715  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:41:43.884047  459741 cri.go:89] found id: ""
	I0717 19:41:43.884080  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.884091  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:41:43.884099  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:41:43.884224  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:41:43.931636  459741 cri.go:89] found id: ""
	I0717 19:41:43.931677  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.931691  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:41:43.931699  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:41:43.931768  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:41:43.969202  459741 cri.go:89] found id: ""
	I0717 19:41:43.969240  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.969260  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:41:43.969275  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:41:43.969296  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:41:44.026443  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:41:44.026500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:41:44.042750  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:41:44.042788  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:41:44.140053  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:41:44.140079  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:41:44.140093  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:41:44.263660  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:41:44.263704  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 19:41:44.311783  459741 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 19:41:44.311838  459741 out.go:239] * 
	W0717 19:41:44.311948  459741 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:41:44.311982  459741 out.go:239] * 
	W0717 19:41:44.313153  459741 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:41:44.316845  459741 out.go:177] 
	W0717 19:41:44.318001  459741 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:41:44.318059  459741 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 19:41:44.318087  459741 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 19:41:44.319471  459741 out.go:177] 
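
A minimal sketch of the checks suggested by the kubeadm/minikube output above, assuming shell access to the failing node; the CRI-O socket path and the cgroup-driver flag are taken verbatim from the log, while <profile> is a placeholder for the affected minikube profile:

	# Is the kubelet running and enabled? (the stderr warning above notes it is not enabled)
	systemctl status kubelet
	sudo systemctl enable --now kubelet.service
	# Look for the reason the :10248/healthz endpoint keeps refusing connections
	journalctl -xeu kubelet --no-pager | tail -n 50
	# List control-plane containers through the CRI-O socket named in the log
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# If a cgroup-driver mismatch is suspected, retry as the suggestion above advises
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd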
	
	
	==> CRI-O <==
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.650381297Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:747e3c04baaf8be3c1846ab9f9aab6c562fc86babedfd29a9141dc6bce79dff7,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-hvknj,Uid:d214e760-d49e-4554-85c2-77e5da1b150f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721245113068529320,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-hvknj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d214e760-d49e-4554-85c2-77e5da1b150f,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T19:38:32.761413982Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5c849fbf37d24b13d02ec43ea34de4c5bb4900e8df6f47e46f77ddf03ec1bb64,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:153a102e-f07b-46b4-a9d0-9e75
4237ca6e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721245113003626391,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 153a102e-f07b-46b4-a9d0-9e754237ca6e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-17T19:38:32.694725167Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:921fbf5ac6336ae0391ff236907cd1ebd3f0d7cca3a44bf18428ac9236a36b68,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jnwgp,Uid:f86efa81-cbe0-44a7-888f-639af3dc58ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721245111531473036,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-jnwgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86efa81-cbe0-44a7-888f-639af3dc58ad,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T19:38:31.224329933Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9cb00855ffe2b7f82615e94a3c1b456857aa3345468448417c99504b1c702562,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xbtct,Uid:c24ce9ab
-babb-4589-8046-e8e2d4ca68af,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721245111514028683,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbtct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c24ce9ab-babb-4589-8046-e8e2d4ca68af,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T19:38:31.204660220Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bf4ce38f928800d6d8e37b8a1f0cda9102a3fe25b1792d8e059a4f8bdcd2b6ab,Metadata:&PodSandboxMetadata{Name:kube-proxy-vhjq4,Uid:092af79d-ebc0-4e16-97ef-725195e95344,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721245110954643559,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vhjq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092af79d-ebc0-4e16-97ef-725195e95344,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T19:38:30.641422072Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:20a0dcbc6c82a702bbffb943ebccbfeafc27bdd65a23905cac9c47e872e5dff2,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-378944,Uid:14df199c96b83cb67a529e48a55d2c4c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721245090896433816,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14df199c96b83cb67a529e48a55d2c4c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.238:2379,kubernetes.io/config.hash: 14df199c96b83cb67a529e48a55d2c4c,kubernetes.io/config.seen: 2024-07-17T19:38:10.435128418Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9a613dfa6983b3c14a990b6c66fb33c37a5
46f230842e06f71d746a484e5d57f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-378944,Uid:9084b0d455367170b4852ba68abb4dc6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721245090893118486,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9084b0d455367170b4852ba68abb4dc6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.238:8444,kubernetes.io/config.hash: 9084b0d455367170b4852ba68abb4dc6,kubernetes.io/config.seen: 2024-07-17T19:38:10.435135365Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:61ecd10ea0f7930a08c4066cae8f7c7aa4ef8bec03bcc63d7ab0f889f705f989,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-378944,Uid:b5e71085d4256531f7ac739262d6bfc6,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1721245090891433004,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5e71085d4256531f7ac739262d6bfc6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b5e71085d4256531f7ac739262d6bfc6,kubernetes.io/config.seen: 2024-07-17T19:38:10.435138137Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:99bcefef6fff75d34890daf9bb5beef3a88e93a57436480b137af95cd6cd26c4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-378944,Uid:dff9bb6abc876dce8a11c05079b5f227,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721245090886033070,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: dff9bb6abc876dce8a11c05079b5f227,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dff9bb6abc876dce8a11c05079b5f227,kubernetes.io/config.seen: 2024-07-17T19:38:10.435136957Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2b1ee908-cfb8-40bf-aedc-494bfeeb608c name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.651321110Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b14f8801-8027-42ea-8db1-812f5cee22ac name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.651487022Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b14f8801-8027-42ea-8db1-812f5cee22ac name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.651681133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4ba7515d592da31a2b4c4476e465d890e7aa23e2f73da3630ba154b0962ec7a,PodSandboxId:5c849fbf37d24b13d02ec43ea34de4c5bb4900e8df6f47e46f77ddf03ec1bb64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721245113136120126,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 153a102e-f07b-46b4-a9d0-9e754237ca6e,},Annotations:map[string]string{io.kubernetes.container.hash: 69d38bc4,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d47e23333116aece2559a60326fe6a5df5839f93c25004eab27cdb9801dc63,PodSandboxId:9cb00855ffe2b7f82615e94a3c1b456857aa3345468448417c99504b1c702562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245112407734192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbtct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c24ce9ab-babb-4589-8046-e8e2d4ca68af,},Annotations:map[string]string{io.kubernetes.container.hash: 85329c3f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb1692aa3f9e24faa294a181c0c0f64462781685f9eaa9411352e2d25dc4708,PodSandboxId:921fbf5ac6336ae0391ff236907cd1ebd3f0d7cca3a44bf18428ac9236a36b68,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245112267949088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jnwgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f86efa81-cbe0-44a7-888f-639af3dc58ad,},Annotations:map[string]string{io.kubernetes.container.hash: 23c240d4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcae6f21a0ff5d48bf1935d3e99b48c424f21734057e63df951a3164da371fe,PodSandboxId:bf4ce38f928800d6d8e37b8a1f0cda9102a3fe25b1792d8e059a4f8bdcd2b6ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721245111105604307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhjq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092af79d-ebc0-4e16-97ef-725195e95344,},Annotations:map[string]string{io.kubernetes.container.hash: b6486252,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2378f4ea65709e70c83df2be208d867791b48264944909f45c931238c812b1,PodSandboxId:20a0dcbc6c82a702bbffb943ebccbfeafc27bdd65a23905cac9c47e872e5dff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721245091175296960,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14df199c96b83cb67a529e48a55d2c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7da9cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:103abd0d3d14d7c5b5011c6dc3e71bc8bd27babc9df0a8fea92d53e6c6206006,PodSandboxId:61ecd10ea0f7930a08c4066cae8f7c7aa4ef8bec03bcc63d7ab0f889f705f989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721245091146781011,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5e71085d4256531f7ac739262d6bfc6,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc51d24cdcb7f5c8c02c1a46f8e9c8b705df6afa70527e1ff4165d5ea670bdce,PodSandboxId:99bcefef6fff75d34890daf9bb5beef3a88e93a57436480b137af95cd6cd26c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721245091113599938,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff9bb6abc876dce8a11c05079b5f227,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b818b853547834df6b166294446b5c6d0222f3b91252733aad9621d70b1293,PodSandboxId:9a613dfa6983b3c14a990b6c66fb33c37a546f230842e06f71d746a484e5d57f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721245091040093713,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9084b0d455367170b4852ba68abb4dc6,},Annotations:map[string]string{io.kubernetes.container.hash: 704f1818,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b14f8801-8027-42ea-8db1-812f5cee22ac name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.664929727Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f563c869-d76b-4816-be0a-322c5153acf7 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.664997665Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f563c869-d76b-4816-be0a-322c5153acf7 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.666417199Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e2e3109e-30eb-4a15-b32d-5dfde87081e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.666939772Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245657666909552,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e2e3109e-30eb-4a15-b32d-5dfde87081e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.667696242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e68ccf4-5780-474f-817d-b2f36d14e5ab name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.667764697Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e68ccf4-5780-474f-817d-b2f36d14e5ab name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.667969557Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4ba7515d592da31a2b4c4476e465d890e7aa23e2f73da3630ba154b0962ec7a,PodSandboxId:5c849fbf37d24b13d02ec43ea34de4c5bb4900e8df6f47e46f77ddf03ec1bb64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721245113136120126,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 153a102e-f07b-46b4-a9d0-9e754237ca6e,},Annotations:map[string]string{io.kubernetes.container.hash: 69d38bc4,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d47e23333116aece2559a60326fe6a5df5839f93c25004eab27cdb9801dc63,PodSandboxId:9cb00855ffe2b7f82615e94a3c1b456857aa3345468448417c99504b1c702562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245112407734192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbtct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c24ce9ab-babb-4589-8046-e8e2d4ca68af,},Annotations:map[string]string{io.kubernetes.container.hash: 85329c3f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb1692aa3f9e24faa294a181c0c0f64462781685f9eaa9411352e2d25dc4708,PodSandboxId:921fbf5ac6336ae0391ff236907cd1ebd3f0d7cca3a44bf18428ac9236a36b68,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245112267949088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jnwgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f86efa81-cbe0-44a7-888f-639af3dc58ad,},Annotations:map[string]string{io.kubernetes.container.hash: 23c240d4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcae6f21a0ff5d48bf1935d3e99b48c424f21734057e63df951a3164da371fe,PodSandboxId:bf4ce38f928800d6d8e37b8a1f0cda9102a3fe25b1792d8e059a4f8bdcd2b6ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721245111105604307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhjq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092af79d-ebc0-4e16-97ef-725195e95344,},Annotations:map[string]string{io.kubernetes.container.hash: b6486252,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2378f4ea65709e70c83df2be208d867791b48264944909f45c931238c812b1,PodSandboxId:20a0dcbc6c82a702bbffb943ebccbfeafc27bdd65a23905cac9c47e872e5dff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721245091175296960,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14df199c96b83cb67a529e48a55d2c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7da9cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:103abd0d3d14d7c5b5011c6dc3e71bc8bd27babc9df0a8fea92d53e6c6206006,PodSandboxId:61ecd10ea0f7930a08c4066cae8f7c7aa4ef8bec03bcc63d7ab0f889f705f989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721245091146781011,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5e71085d4256531f7ac739262d6bfc6,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc51d24cdcb7f5c8c02c1a46f8e9c8b705df6afa70527e1ff4165d5ea670bdce,PodSandboxId:99bcefef6fff75d34890daf9bb5beef3a88e93a57436480b137af95cd6cd26c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721245091113599938,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff9bb6abc876dce8a11c05079b5f227,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b818b853547834df6b166294446b5c6d0222f3b91252733aad9621d70b1293,PodSandboxId:9a613dfa6983b3c14a990b6c66fb33c37a546f230842e06f71d746a484e5d57f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721245091040093713,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9084b0d455367170b4852ba68abb4dc6,},Annotations:map[string]string{io.kubernetes.container.hash: 704f1818,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e68ccf4-5780-474f-817d-b2f36d14e5ab name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.709662184Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16428430-4843-4e89-82aa-f76202debaf7 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.709787253Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16428430-4843-4e89-82aa-f76202debaf7 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.711521719Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d59c694-173e-429b-8acb-f5cfe92aea12 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.711917289Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245657711896867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d59c694-173e-429b-8acb-f5cfe92aea12 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.712743578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=658cf887-456e-489a-a8ee-a03afa5fa020 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.712818259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=658cf887-456e-489a-a8ee-a03afa5fa020 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.713013021Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4ba7515d592da31a2b4c4476e465d890e7aa23e2f73da3630ba154b0962ec7a,PodSandboxId:5c849fbf37d24b13d02ec43ea34de4c5bb4900e8df6f47e46f77ddf03ec1bb64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721245113136120126,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 153a102e-f07b-46b4-a9d0-9e754237ca6e,},Annotations:map[string]string{io.kubernetes.container.hash: 69d38bc4,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d47e23333116aece2559a60326fe6a5df5839f93c25004eab27cdb9801dc63,PodSandboxId:9cb00855ffe2b7f82615e94a3c1b456857aa3345468448417c99504b1c702562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245112407734192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbtct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c24ce9ab-babb-4589-8046-e8e2d4ca68af,},Annotations:map[string]string{io.kubernetes.container.hash: 85329c3f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb1692aa3f9e24faa294a181c0c0f64462781685f9eaa9411352e2d25dc4708,PodSandboxId:921fbf5ac6336ae0391ff236907cd1ebd3f0d7cca3a44bf18428ac9236a36b68,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245112267949088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jnwgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f86efa81-cbe0-44a7-888f-639af3dc58ad,},Annotations:map[string]string{io.kubernetes.container.hash: 23c240d4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcae6f21a0ff5d48bf1935d3e99b48c424f21734057e63df951a3164da371fe,PodSandboxId:bf4ce38f928800d6d8e37b8a1f0cda9102a3fe25b1792d8e059a4f8bdcd2b6ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721245111105604307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhjq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092af79d-ebc0-4e16-97ef-725195e95344,},Annotations:map[string]string{io.kubernetes.container.hash: b6486252,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2378f4ea65709e70c83df2be208d867791b48264944909f45c931238c812b1,PodSandboxId:20a0dcbc6c82a702bbffb943ebccbfeafc27bdd65a23905cac9c47e872e5dff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721245091175296960,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14df199c96b83cb67a529e48a55d2c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7da9cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:103abd0d3d14d7c5b5011c6dc3e71bc8bd27babc9df0a8fea92d53e6c6206006,PodSandboxId:61ecd10ea0f7930a08c4066cae8f7c7aa4ef8bec03bcc63d7ab0f889f705f989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721245091146781011,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5e71085d4256531f7ac739262d6bfc6,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc51d24cdcb7f5c8c02c1a46f8e9c8b705df6afa70527e1ff4165d5ea670bdce,PodSandboxId:99bcefef6fff75d34890daf9bb5beef3a88e93a57436480b137af95cd6cd26c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721245091113599938,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff9bb6abc876dce8a11c05079b5f227,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b818b853547834df6b166294446b5c6d0222f3b91252733aad9621d70b1293,PodSandboxId:9a613dfa6983b3c14a990b6c66fb33c37a546f230842e06f71d746a484e5d57f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721245091040093713,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9084b0d455367170b4852ba68abb4dc6,},Annotations:map[string]string{io.kubernetes.container.hash: 704f1818,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=658cf887-456e-489a-a8ee-a03afa5fa020 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.748448034Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb310fee-87d4-4f8d-985a-8601e8db1bf9 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.748853262Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb310fee-87d4-4f8d-985a-8601e8db1bf9 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.750938517Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d324fe4-3bd9-4362-b704-24bd2ed54dcf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.751520753Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245657751489884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d324fe4-3bd9-4362-b704-24bd2ed54dcf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.752375076Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d99c71c-821a-4fa4-9d83-f323a18ff3b4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.752465030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d99c71c-821a-4fa4-9d83-f323a18ff3b4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:47:37 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:47:37.752732989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4ba7515d592da31a2b4c4476e465d890e7aa23e2f73da3630ba154b0962ec7a,PodSandboxId:5c849fbf37d24b13d02ec43ea34de4c5bb4900e8df6f47e46f77ddf03ec1bb64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721245113136120126,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 153a102e-f07b-46b4-a9d0-9e754237ca6e,},Annotations:map[string]string{io.kubernetes.container.hash: 69d38bc4,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d47e23333116aece2559a60326fe6a5df5839f93c25004eab27cdb9801dc63,PodSandboxId:9cb00855ffe2b7f82615e94a3c1b456857aa3345468448417c99504b1c702562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245112407734192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbtct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c24ce9ab-babb-4589-8046-e8e2d4ca68af,},Annotations:map[string]string{io.kubernetes.container.hash: 85329c3f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb1692aa3f9e24faa294a181c0c0f64462781685f9eaa9411352e2d25dc4708,PodSandboxId:921fbf5ac6336ae0391ff236907cd1ebd3f0d7cca3a44bf18428ac9236a36b68,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245112267949088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jnwgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f86efa81-cbe0-44a7-888f-639af3dc58ad,},Annotations:map[string]string{io.kubernetes.container.hash: 23c240d4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcae6f21a0ff5d48bf1935d3e99b48c424f21734057e63df951a3164da371fe,PodSandboxId:bf4ce38f928800d6d8e37b8a1f0cda9102a3fe25b1792d8e059a4f8bdcd2b6ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721245111105604307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhjq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092af79d-ebc0-4e16-97ef-725195e95344,},Annotations:map[string]string{io.kubernetes.container.hash: b6486252,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2378f4ea65709e70c83df2be208d867791b48264944909f45c931238c812b1,PodSandboxId:20a0dcbc6c82a702bbffb943ebccbfeafc27bdd65a23905cac9c47e872e5dff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721245091175296960,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14df199c96b83cb67a529e48a55d2c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7da9cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:103abd0d3d14d7c5b5011c6dc3e71bc8bd27babc9df0a8fea92d53e6c6206006,PodSandboxId:61ecd10ea0f7930a08c4066cae8f7c7aa4ef8bec03bcc63d7ab0f889f705f989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721245091146781011,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5e71085d4256531f7ac739262d6bfc6,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc51d24cdcb7f5c8c02c1a46f8e9c8b705df6afa70527e1ff4165d5ea670bdce,PodSandboxId:99bcefef6fff75d34890daf9bb5beef3a88e93a57436480b137af95cd6cd26c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721245091113599938,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff9bb6abc876dce8a11c05079b5f227,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b818b853547834df6b166294446b5c6d0222f3b91252733aad9621d70b1293,PodSandboxId:9a613dfa6983b3c14a990b6c66fb33c37a546f230842e06f71d746a484e5d57f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721245091040093713,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9084b0d455367170b4852ba68abb4dc6,},Annotations:map[string]string{io.kubernetes.container.hash: 704f1818,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d99c71c-821a-4fa4-9d83-f323a18ff3b4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e4ba7515d592d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   5c849fbf37d24       storage-provisioner
	24d47e2333311       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   9cb00855ffe2b       coredns-7db6d8ff4d-xbtct
	7bb1692aa3f9e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   921fbf5ac6336       coredns-7db6d8ff4d-jnwgp
	4dcae6f21a0ff       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   9 minutes ago       Running             kube-proxy                0                   bf4ce38f92880       kube-proxy-vhjq4
	ab2378f4ea657       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   20a0dcbc6c82a       etcd-default-k8s-diff-port-378944
	103abd0d3d14d       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   9 minutes ago       Running             kube-scheduler            2                   61ecd10ea0f79       kube-scheduler-default-k8s-diff-port-378944
	cc51d24cdcb7f       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   9 minutes ago       Running             kube-controller-manager   2                   99bcefef6fff7       kube-controller-manager-default-k8s-diff-port-378944
	14b818b853547       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   9 minutes ago       Running             kube-apiserver            2                   9a613dfa6983b       kube-apiserver-default-k8s-diff-port-378944
	
	
	==> coredns [24d47e23333116aece2559a60326fe6a5df5839f93c25004eab27cdb9801dc63] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [7bb1692aa3f9e24faa294a181c0c0f64462781685f9eaa9411352e2d25dc4708] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-378944
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-378944
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=default-k8s-diff-port-378944
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T19_38_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 19:38:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-378944
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 19:47:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 19:43:43 +0000   Wed, 17 Jul 2024 19:38:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 19:43:43 +0000   Wed, 17 Jul 2024 19:38:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 19:43:43 +0000   Wed, 17 Jul 2024 19:38:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 19:43:43 +0000   Wed, 17 Jul 2024 19:38:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.238
	  Hostname:    default-k8s-diff-port-378944
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4a42b743f7394d0994c7a7306917821b
	  System UUID:                4a42b743-f739-4d09-94c7-a7306917821b
	  Boot ID:                    a7d2dfb6-f1fc-4381-96be-ccbe07d367bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-jnwgp                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-xbtct                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-default-k8s-diff-port-378944                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-378944             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-378944    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-proxy-vhjq4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-default-k8s-diff-port-378944             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-569cc877fc-hvknj                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m28s (x8 over 9m28s)  kubelet          Node default-k8s-diff-port-378944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m28s (x8 over 9m28s)  kubelet          Node default-k8s-diff-port-378944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m28s (x7 over 9m28s)  kubelet          Node default-k8s-diff-port-378944 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m22s                  kubelet          Node default-k8s-diff-port-378944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s                  kubelet          Node default-k8s-diff-port-378944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s                  kubelet          Node default-k8s-diff-port-378944 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s                   node-controller  Node default-k8s-diff-port-378944 event: Registered Node default-k8s-diff-port-378944 in Controller
	
	
	==> dmesg <==
	[  +0.039771] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul17 19:33] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.286969] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.619288] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.586140] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.055076] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067520] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.204277] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.146874] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.328398] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[  +4.557310] systemd-fstab-generator[808]: Ignoring "noauto" option for root device
	[  +0.062630] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.987226] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +5.530871] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.320479] kauditd_printk_skb: 50 callbacks suppressed
	[  +7.022456] kauditd_printk_skb: 27 callbacks suppressed
	[Jul17 19:38] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.190238] systemd-fstab-generator[3596]: Ignoring "noauto" option for root device
	[  +4.785500] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.787293] systemd-fstab-generator[3916]: Ignoring "noauto" option for root device
	[ +14.743459] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.191358] systemd-fstab-generator[4183]: Ignoring "noauto" option for root device
	[Jul17 19:39] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [ab2378f4ea65709e70c83df2be208d867791b48264944909f45c931238c812b1] <==
	{"level":"info","ts":"2024-07-17T19:38:11.603825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"943d0bcc43b450ee switched to configuration voters=(10681706863129809134)"}
	{"level":"info","ts":"2024-07-17T19:38:11.621973Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T19:38:11.622147Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e65e2bd58fd0c13a","local-member-id":"943d0bcc43b450ee","added-peer-id":"943d0bcc43b450ee","added-peer-peer-urls":["https://192.168.50.238:2380"]}
	{"level":"info","ts":"2024-07-17T19:38:11.622583Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"943d0bcc43b450ee","initial-advertise-peer-urls":["https://192.168.50.238:2380"],"listen-peer-urls":["https://192.168.50.238:2380"],"advertise-client-urls":["https://192.168.50.238:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.238:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T19:38:11.622781Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T19:38:11.626578Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.238:2380"}
	{"level":"info","ts":"2024-07-17T19:38:11.626666Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.238:2380"}
	{"level":"info","ts":"2024-07-17T19:38:12.555289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"943d0bcc43b450ee is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-17T19:38:12.555445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"943d0bcc43b450ee became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T19:38:12.555485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"943d0bcc43b450ee received MsgPreVoteResp from 943d0bcc43b450ee at term 1"}
	{"level":"info","ts":"2024-07-17T19:38:12.555516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"943d0bcc43b450ee became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T19:38:12.55554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"943d0bcc43b450ee received MsgVoteResp from 943d0bcc43b450ee at term 2"}
	{"level":"info","ts":"2024-07-17T19:38:12.555567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"943d0bcc43b450ee became leader at term 2"}
	{"level":"info","ts":"2024-07-17T19:38:12.555593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 943d0bcc43b450ee elected leader 943d0bcc43b450ee at term 2"}
	{"level":"info","ts":"2024-07-17T19:38:12.55944Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"943d0bcc43b450ee","local-member-attributes":"{Name:default-k8s-diff-port-378944 ClientURLs:[https://192.168.50.238:2379]}","request-path":"/0/members/943d0bcc43b450ee/attributes","cluster-id":"e65e2bd58fd0c13a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T19:38:12.560266Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T19:38:12.560349Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T19:38:12.560721Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:38:12.564379Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T19:38:12.564439Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T19:38:12.566135Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T19:38:12.568541Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.238:2379"}
	{"level":"info","ts":"2024-07-17T19:38:12.568641Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e65e2bd58fd0c13a","local-member-id":"943d0bcc43b450ee","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:38:12.568719Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:38:12.568763Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:47:38 up 14 min,  0 users,  load average: 0.14, 0.26, 0.21
	Linux default-k8s-diff-port-378944 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [14b818b853547834df6b166294446b5c6d0222f3b91252733aad9621d70b1293] <==
	I0717 19:41:33.593366       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:43:14.047613       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:43:14.047745       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0717 19:43:15.048827       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:43:15.048908       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 19:43:15.048917       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:43:15.048845       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:43:15.049027       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 19:43:15.050072       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:44:15.049487       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:44:15.049670       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 19:44:15.049680       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:44:15.050424       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:44:15.050462       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 19:44:15.051612       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:46:15.050589       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:46:15.050947       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 19:46:15.050983       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:46:15.052757       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:46:15.052807       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 19:46:15.052815       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [cc51d24cdcb7f5c8c02c1a46f8e9c8b705df6afa70527e1ff4165d5ea670bdce] <==
	I0717 19:42:00.726778       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:42:30.277065       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:42:30.738453       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:43:00.283139       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:43:00.746774       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:43:30.289530       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:43:30.758641       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:44:00.294149       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:44:00.766746       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:44:30.299898       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:44:30.774905       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 19:44:35.572787       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="1.292226ms"
	I0717 19:44:46.568536       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="99.608µs"
	E0717 19:45:00.305068       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:45:00.783518       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:45:30.311042       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:45:30.791120       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:46:00.319100       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:46:00.799255       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:46:30.324648       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:46:30.806987       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:47:00.330852       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:47:00.817365       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:47:30.336414       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:47:30.825604       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4dcae6f21a0ff5d48bf1935d3e99b48c424f21734057e63df951a3164da371fe] <==
	I0717 19:38:31.509339       1 server_linux.go:69] "Using iptables proxy"
	I0717 19:38:31.525984       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.238"]
	I0717 19:38:31.577316       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 19:38:31.577368       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 19:38:31.577440       1 server_linux.go:165] "Using iptables Proxier"
	I0717 19:38:31.588554       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:38:31.588757       1 server.go:872] "Version info" version="v1.30.2"
	I0717 19:38:31.588769       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:38:31.590694       1 config.go:192] "Starting service config controller"
	I0717 19:38:31.590728       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 19:38:31.590751       1 config.go:101] "Starting endpoint slice config controller"
	I0717 19:38:31.590754       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 19:38:31.591328       1 config.go:319] "Starting node config controller"
	I0717 19:38:31.591354       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 19:38:31.691374       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 19:38:31.691444       1 shared_informer.go:320] Caches are synced for service config
	I0717 19:38:31.691661       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [103abd0d3d14d7c5b5011c6dc3e71bc8bd27babc9df0a8fea92d53e6c6206006] <==
	W0717 19:38:14.089944       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 19:38:14.089999       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 19:38:14.090027       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 19:38:14.089985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 19:38:14.939127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 19:38:14.939180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 19:38:15.033872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 19:38:15.033916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 19:38:15.048767       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 19:38:15.048816       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 19:38:15.130087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 19:38:15.130139       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 19:38:15.153539       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 19:38:15.153584       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:38:15.176961       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 19:38:15.177007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 19:38:15.210548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 19:38:15.210596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 19:38:15.243304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 19:38:15.243357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 19:38:15.326176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 19:38:15.326277       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 19:38:15.337720       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 19:38:15.337812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0717 19:38:17.682470       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 19:45:16 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:45:16.569769    3923 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:45:16 default-k8s-diff-port-378944 kubelet[3923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:45:16 default-k8s-diff-port-378944 kubelet[3923]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:45:16 default-k8s-diff-port-378944 kubelet[3923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:45:16 default-k8s-diff-port-378944 kubelet[3923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:45:23 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:45:23.552593    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:45:34 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:45:34.554018    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:45:48 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:45:48.553103    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:46:03 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:46:03.552251    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:46:16 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:46:16.570633    3923 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:46:16 default-k8s-diff-port-378944 kubelet[3923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:46:16 default-k8s-diff-port-378944 kubelet[3923]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:46:16 default-k8s-diff-port-378944 kubelet[3923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:46:16 default-k8s-diff-port-378944 kubelet[3923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:46:18 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:46:18.554481    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:46:29 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:46:29.552615    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:46:43 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:46:43.552797    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:46:57 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:46:57.552019    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:47:10 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:47:10.552864    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:47:16 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:47:16.569711    3923 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:47:16 default-k8s-diff-port-378944 kubelet[3923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:47:16 default-k8s-diff-port-378944 kubelet[3923]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:47:16 default-k8s-diff-port-378944 kubelet[3923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:47:16 default-k8s-diff-port-378944 kubelet[3923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:47:25 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:47:25.553136    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	
	
	==> storage-provisioner [e4ba7515d592da31a2b4c4476e465d890e7aa23e2f73da3630ba154b0962ec7a] <==
	I0717 19:38:33.238314       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 19:38:33.247911       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 19:38:33.247982       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 19:38:33.271003       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 19:38:33.274573       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-378944_a99e1451-b1f4-4720-b401-bbb284e90d24!
	I0717 19:38:33.272401       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3646205-7ea5-44df-80a4-502f2d564366", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-378944_a99e1451-b1f4-4720-b401-bbb284e90d24 became leader
	I0717 19:38:33.375461       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-378944_a99e1451-b1f4-4720-b401-bbb284e90d24!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-378944 -n default-k8s-diff-port-378944
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-378944 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-hvknj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-378944 describe pod metrics-server-569cc877fc-hvknj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-378944 describe pod metrics-server-569cc877fc-hvknj: exit status 1 (66.825132ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-hvknj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-378944 describe pod metrics-server-569cc877fc-hvknj: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.46s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 19:39:47.588897  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:40:05.951231  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 19:40:16.145071  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 19:41:02.804108  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:41:10.635466  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:41:35.251721  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-637675 -n embed-certs-637675
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-17 19:48:16.971364241 +0000 UTC m=+6353.288381881
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-637675 -n embed-certs-637675
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-637675 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-637675 logs -n 25: (2.114074843s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-369638 sudo cat                              | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo find                             | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo crio                             | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-369638                                       | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	| delete  | -p                                                     | disable-driver-mounts-728347 | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | disable-driver-mounts-728347                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:25 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-637675            | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC | 17 Jul 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-713715             | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC | 17 Jul 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-713715                                   | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-378944  | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC | 17 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC |                     |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-998147        | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-637675                 | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-713715                  | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC | 17 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-713715 --memory=2200                     | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:37 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-378944       | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:38 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-998147             | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
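	The flags of the final start entry above are wrapped across several table rows; reassembled into a single invocation (using the binary path set by MINIKUBE_BIN later in the log), it reads roughly as follows — a reconstruction from the table, not a line copied from the job definition:
	
	  out/minikube-linux-amd64 start -p old-k8s-version-998147 \
	    --memory=2200 --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system \
	    --disable-driver-mounts --keep-context=false \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.20.0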
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:29:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
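	Each entry below follows the klog format described above: severity letter ([IWEF]), mmdd date, hh:mm:ss.uuuuuu timestamp, thread id, file:line, then the message. A minimal shell sketch for splitting one such line into labelled fields; the regular expression and field labels are illustrative assumptions, not minikube tooling:
	
	  echo 'I0717 19:29:11.500453  459741 out.go:291] Setting OutFile to fd 1 ...' \
	    | sed -E 's/^([IWEF])([0-9]{4}) ([0-9:.]+) +([0-9]+) ([^]]+)\] (.*)$/severity=\1 date=\2 time=\3 threadid=\4 source=\5 msg="\6"/'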
	I0717 19:29:11.500453  459741 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:29:11.500622  459741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:29:11.500633  459741 out.go:304] Setting ErrFile to fd 2...
	I0717 19:29:11.500639  459741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:29:11.500842  459741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:29:11.501399  459741 out.go:298] Setting JSON to false
	I0717 19:29:11.502411  459741 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11494,"bootTime":1721233057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:29:11.502474  459741 start.go:139] virtualization: kvm guest
	I0717 19:29:11.504961  459741 out.go:177] * [old-k8s-version-998147] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:29:11.506551  459741 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 19:29:11.506614  459741 notify.go:220] Checking for updates...
	I0717 19:29:11.509388  459741 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:29:11.511209  459741 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:29:11.512669  459741 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:29:11.514164  459741 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:29:11.515499  459741 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:29:11.517240  459741 config.go:182] Loaded profile config "old-k8s-version-998147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:29:11.517702  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:29:11.517772  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:29:11.533954  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42501
	I0717 19:29:11.534390  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:29:11.534975  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:29:11.535003  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:29:11.535362  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:29:11.535550  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:29:11.537723  459741 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 19:29:11.539119  459741 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:29:11.539416  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:29:11.539452  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:29:11.554412  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32849
	I0717 19:29:11.554815  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:29:11.555296  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:29:11.555317  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:29:11.555633  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:29:11.555830  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:29:11.590907  459741 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:29:11.592089  459741 start.go:297] selected driver: kvm2
	I0717 19:29:11.592110  459741 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:29:11.592224  459741 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:29:11.592942  459741 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:29:11.593047  459741 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:29:11.607578  459741 install.go:137] /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 19:29:11.607960  459741 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:29:11.608027  459741 cni.go:84] Creating CNI manager for ""
	I0717 19:29:11.608045  459741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:29:11.608102  459741 start.go:340] cluster config:
	{Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:29:11.608223  459741 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:29:11.609956  459741 out.go:177] * Starting "old-k8s-version-998147" primary control-plane node in "old-k8s-version-998147" cluster
	I0717 19:29:15.576809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:11.611130  459741 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:29:11.611167  459741 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 19:29:11.611178  459741 cache.go:56] Caching tarball of preloaded images
	I0717 19:29:11.611285  459741 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:29:11.611302  459741 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 19:29:11.611414  459741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json ...
	I0717 19:29:11.611598  459741 start.go:360] acquireMachinesLock for old-k8s-version-998147: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:29:18.648779  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:24.728819  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:27.800821  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:33.880750  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:36.952809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:43.032777  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:46.104785  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:52.184787  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:55.260741  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:01.336761  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:04.408863  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:10.488814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:13.560771  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:19.640809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:22.712791  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:28.792742  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:31.864819  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:37.944814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:41.016844  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:47.096765  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:50.168766  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:56.248814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:59.320805  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:05.400752  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:08.472800  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:14.552805  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:17.624781  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:23.704775  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:26.776769  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:32.856798  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:35.928859  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:42.008795  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:45.080741  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:51.160806  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:54.232765  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:00.312835  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:03.384814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:09.464779  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:12.536704  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:18.616758  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:21.688749  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:27.768726  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:30.840760  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:33.845161  459147 start.go:364] duration metric: took 4m31.30170624s to acquireMachinesLock for "no-preload-713715"
	I0717 19:32:33.845231  459147 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:32:33.845239  459147 fix.go:54] fixHost starting: 
	I0717 19:32:33.845641  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:32:33.845672  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:32:33.861218  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I0717 19:32:33.861739  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:32:33.862269  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:32:33.862294  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:32:33.862688  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:32:33.862906  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:33.863078  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:32:33.864713  459147 fix.go:112] recreateIfNeeded on no-preload-713715: state=Stopped err=<nil>
	I0717 19:32:33.864747  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	W0717 19:32:33.864918  459147 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:32:33.866791  459147 out.go:177] * Restarting existing kvm2 VM for "no-preload-713715" ...
	I0717 19:32:33.842533  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:32:33.842571  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:32:33.842991  459061 buildroot.go:166] provisioning hostname "embed-certs-637675"
	I0717 19:32:33.843030  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:32:33.843258  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:32:33.844991  459061 machine.go:97] duration metric: took 4m37.424855793s to provisionDockerMachine
	I0717 19:32:33.845049  459061 fix.go:56] duration metric: took 4m37.444711115s for fixHost
	I0717 19:32:33.845058  459061 start.go:83] releasing machines lock for "embed-certs-637675", held for 4m37.444736968s
	W0717 19:32:33.845085  459061 start.go:714] error starting host: provision: host is not running
	W0717 19:32:33.845226  459061 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 19:32:33.845240  459061 start.go:729] Will try again in 5 seconds ...
	I0717 19:32:33.868034  459147 main.go:141] libmachine: (no-preload-713715) Calling .Start
	I0717 19:32:33.868203  459147 main.go:141] libmachine: (no-preload-713715) Ensuring networks are active...
	I0717 19:32:33.868998  459147 main.go:141] libmachine: (no-preload-713715) Ensuring network default is active
	I0717 19:32:33.869310  459147 main.go:141] libmachine: (no-preload-713715) Ensuring network mk-no-preload-713715 is active
	I0717 19:32:33.869667  459147 main.go:141] libmachine: (no-preload-713715) Getting domain xml...
	I0717 19:32:33.870300  459147 main.go:141] libmachine: (no-preload-713715) Creating domain...
	I0717 19:32:35.077699  459147 main.go:141] libmachine: (no-preload-713715) Waiting to get IP...
	I0717 19:32:35.078453  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.078991  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.079061  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.078942  460425 retry.go:31] will retry after 213.705648ms: waiting for machine to come up
	I0717 19:32:35.294580  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.294987  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.295015  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.294949  460425 retry.go:31] will retry after 341.137055ms: waiting for machine to come up
	I0717 19:32:35.637531  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.637894  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.637922  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.637842  460425 retry.go:31] will retry after 479.10915ms: waiting for machine to come up
	I0717 19:32:36.118434  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:36.118887  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:36.118918  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:36.118837  460425 retry.go:31] will retry after 404.249247ms: waiting for machine to come up
	I0717 19:32:36.524442  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:36.524847  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:36.524880  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:36.524812  460425 retry.go:31] will retry after 737.708741ms: waiting for machine to come up
	I0717 19:32:37.263864  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:37.264365  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:37.264393  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:37.264241  460425 retry.go:31] will retry after 793.874529ms: waiting for machine to come up
	I0717 19:32:38.846990  459061 start.go:360] acquireMachinesLock for embed-certs-637675: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:32:38.059206  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:38.059645  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:38.059671  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:38.059592  460425 retry.go:31] will retry after 831.952935ms: waiting for machine to come up
	I0717 19:32:38.893113  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:38.893595  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:38.893623  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:38.893496  460425 retry.go:31] will retry after 955.463175ms: waiting for machine to come up
	I0717 19:32:39.850681  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:39.851111  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:39.851146  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:39.851045  460425 retry.go:31] will retry after 1.513026699s: waiting for machine to come up
	I0717 19:32:41.365899  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:41.366497  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:41.366528  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:41.366435  460425 retry.go:31] will retry after 1.503398124s: waiting for machine to come up
	I0717 19:32:42.872396  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:42.872932  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:42.872961  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:42.872904  460425 retry.go:31] will retry after 2.818722445s: waiting for machine to come up
	I0717 19:32:45.692847  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:45.693240  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:45.693270  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:45.693168  460425 retry.go:31] will retry after 2.647833654s: waiting for machine to come up
	I0717 19:32:48.344167  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:48.344671  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:48.344711  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:48.344593  460425 retry.go:31] will retry after 3.625317785s: waiting for machine to come up
	I0717 19:32:51.973297  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.973853  459147 main.go:141] libmachine: (no-preload-713715) Found IP for machine: 192.168.61.66
	I0717 19:32:51.973882  459147 main.go:141] libmachine: (no-preload-713715) Reserving static IP address...
	I0717 19:32:51.973897  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has current primary IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.974288  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "no-preload-713715", mac: "52:54:00:9e:fc:38", ip: "192.168.61.66"} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:51.974314  459147 main.go:141] libmachine: (no-preload-713715) DBG | skip adding static IP to network mk-no-preload-713715 - found existing host DHCP lease matching {name: "no-preload-713715", mac: "52:54:00:9e:fc:38", ip: "192.168.61.66"}
	I0717 19:32:51.974324  459147 main.go:141] libmachine: (no-preload-713715) Reserved static IP address: 192.168.61.66
	I0717 19:32:51.974334  459147 main.go:141] libmachine: (no-preload-713715) Waiting for SSH to be available...
	I0717 19:32:51.974342  459147 main.go:141] libmachine: (no-preload-713715) DBG | Getting to WaitForSSH function...
	I0717 19:32:51.976322  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.976760  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:51.976804  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.976918  459147 main.go:141] libmachine: (no-preload-713715) DBG | Using SSH client type: external
	I0717 19:32:51.976956  459147 main.go:141] libmachine: (no-preload-713715) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa (-rw-------)
	I0717 19:32:51.976993  459147 main.go:141] libmachine: (no-preload-713715) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:32:51.977004  459147 main.go:141] libmachine: (no-preload-713715) DBG | About to run SSH command:
	I0717 19:32:51.977013  459147 main.go:141] libmachine: (no-preload-713715) DBG | exit 0
	I0717 19:32:52.100405  459147 main.go:141] libmachine: (no-preload-713715) DBG | SSH cmd err, output: <nil>: 
	I0717 19:32:52.100914  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetConfigRaw
	I0717 19:32:52.101578  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:52.103993  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.104431  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.104461  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.104779  459147 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/config.json ...
	I0717 19:32:52.104987  459147 machine.go:94] provisionDockerMachine start ...
	I0717 19:32:52.105006  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:52.105234  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.107642  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.108002  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.108027  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.108132  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.108311  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.108472  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.108628  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.108804  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.109027  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.109037  459147 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:32:52.216916  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:32:52.216949  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.217209  459147 buildroot.go:166] provisioning hostname "no-preload-713715"
	I0717 19:32:52.217238  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.217427  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.220152  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.220434  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.220472  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.220716  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.220923  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.221117  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.221230  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.221386  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.221575  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.221592  459147 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-713715 && echo "no-preload-713715" | sudo tee /etc/hostname
	I0717 19:32:52.343761  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-713715
	
	I0717 19:32:52.343802  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.347059  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.347370  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.347400  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.347652  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.347883  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.348182  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.348374  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.348625  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.348820  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.348836  459147 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-713715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-713715/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-713715' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:32:53.313707  459447 start.go:364] duration metric: took 4m16.715852426s to acquireMachinesLock for "default-k8s-diff-port-378944"
	I0717 19:32:53.313783  459447 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:32:53.313790  459447 fix.go:54] fixHost starting: 
	I0717 19:32:53.314243  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:32:53.314285  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:32:53.330763  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0717 19:32:53.331159  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:32:53.331660  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:32:53.331686  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:32:53.332089  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:32:53.332319  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:32:53.332479  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:32:53.334126  459447 fix.go:112] recreateIfNeeded on default-k8s-diff-port-378944: state=Stopped err=<nil>
	I0717 19:32:53.334172  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	W0717 19:32:53.334327  459447 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:32:53.336801  459447 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-378944" ...
	I0717 19:32:52.462144  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:32:52.462179  459147 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:32:52.462197  459147 buildroot.go:174] setting up certificates
	I0717 19:32:52.462210  459147 provision.go:84] configureAuth start
	I0717 19:32:52.462224  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.462579  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:52.465348  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.465889  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.465919  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.466069  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.468522  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.468914  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.468950  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.469041  459147 provision.go:143] copyHostCerts
	I0717 19:32:52.469126  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:32:52.469146  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:32:52.469234  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:32:52.469357  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:32:52.469367  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:32:52.469408  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:32:52.469492  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:32:52.469501  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:32:52.469535  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:32:52.469621  459147 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.no-preload-713715 san=[127.0.0.1 192.168.61.66 localhost minikube no-preload-713715]
	I0717 19:32:52.650963  459147 provision.go:177] copyRemoteCerts
	I0717 19:32:52.651037  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:32:52.651075  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.654245  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.654597  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.654616  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.654825  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.655055  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.655215  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.655411  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:52.739048  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:32:52.762566  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 19:32:52.785616  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:32:52.808881  459147 provision.go:87] duration metric: took 346.648771ms to configureAuth
	I0717 19:32:52.808922  459147 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:32:52.809145  459147 config.go:182] Loaded profile config "no-preload-713715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:32:52.809246  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.812111  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.812423  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.812457  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.812686  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.812885  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.813186  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.813346  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.813542  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.813778  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.813800  459147 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:32:53.076607  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:32:53.076638  459147 machine.go:97] duration metric: took 971.636298ms to provisionDockerMachine
	I0717 19:32:53.076652  459147 start.go:293] postStartSetup for "no-preload-713715" (driver="kvm2")
	I0717 19:32:53.076685  459147 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:32:53.076714  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.077033  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:32:53.077068  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.079605  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.079887  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.079911  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.080028  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.080217  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.080401  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.080593  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.163562  459147 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:32:53.167996  459147 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:32:53.168026  459147 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:32:53.168111  459147 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:32:53.168194  459147 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:32:53.168304  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:32:53.178039  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:32:53.201841  459147 start.go:296] duration metric: took 125.171457ms for postStartSetup
	I0717 19:32:53.201908  459147 fix.go:56] duration metric: took 19.356669392s for fixHost
	I0717 19:32:53.201944  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.204438  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.204823  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.204847  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.205012  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.205195  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.205352  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.205501  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.205632  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:53.205807  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:53.205818  459147 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 19:32:53.313516  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244773.289121394
	
	I0717 19:32:53.313540  459147 fix.go:216] guest clock: 1721244773.289121394
	I0717 19:32:53.313547  459147 fix.go:229] Guest: 2024-07-17 19:32:53.289121394 +0000 UTC Remote: 2024-07-17 19:32:53.201923093 +0000 UTC m=+290.801143172 (delta=87.198301ms)
	I0717 19:32:53.313569  459147 fix.go:200] guest clock delta is within tolerance: 87.198301ms
	I0717 19:32:53.313595  459147 start.go:83] releasing machines lock for "no-preload-713715", held for 19.468370802s
	I0717 19:32:53.313630  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.313917  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:53.316881  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.317256  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.317287  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.317443  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.317922  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.318107  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.318182  459147 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:32:53.318238  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.318358  459147 ssh_runner.go:195] Run: cat /version.json
	I0717 19:32:53.318384  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.321257  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321424  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321620  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.321641  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321748  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.321772  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321815  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.322061  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.322079  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.322282  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.322280  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.322459  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.322464  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.322592  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.401861  459147 ssh_runner.go:195] Run: systemctl --version
	I0717 19:32:53.425378  459147 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:32:53.567192  459147 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:32:53.575354  459147 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:32:53.575425  459147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:32:53.595781  459147 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:32:53.595818  459147 start.go:495] detecting cgroup driver to use...
	I0717 19:32:53.595955  459147 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:32:53.611488  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:32:53.625548  459147 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:32:53.625612  459147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:32:53.639207  459147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:32:53.652721  459147 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:32:53.772322  459147 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:32:53.942009  459147 docker.go:233] disabling docker service ...
	I0717 19:32:53.942092  459147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:32:53.961729  459147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:32:53.974585  459147 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:32:54.112406  459147 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:32:54.245426  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:32:54.259855  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:32:54.278930  459147 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 19:32:54.279008  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.289913  459147 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:32:54.289992  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.300687  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.312480  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.324895  459147 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:32:54.335879  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.347434  459147 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.367882  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.379415  459147 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:32:54.390488  459147 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:32:54.390554  459147 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:32:54.411855  459147 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:32:54.423747  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:32:54.562086  459147 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:32:54.707957  459147 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:32:54.708052  459147 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:32:54.712631  459147 start.go:563] Will wait 60s for crictl version
	I0717 19:32:54.712693  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:54.716329  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:32:54.753525  459147 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:32:54.753634  459147 ssh_runner.go:195] Run: crio --version
	I0717 19:32:54.782659  459147 ssh_runner.go:195] Run: crio --version
	I0717 19:32:54.813996  459147 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 19:32:53.338154  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Start
	I0717 19:32:53.338327  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring networks are active...
	I0717 19:32:53.338965  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring network default is active
	I0717 19:32:53.339348  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring network mk-default-k8s-diff-port-378944 is active
	I0717 19:32:53.339780  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Getting domain xml...
	I0717 19:32:53.340436  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Creating domain...
	I0717 19:32:54.632016  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting to get IP...
	I0717 19:32:54.632953  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.633425  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.633541  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:54.633409  460568 retry.go:31] will retry after 191.141019ms: waiting for machine to come up
	I0717 19:32:54.825767  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.826279  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.826311  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:54.826243  460568 retry.go:31] will retry after 334.738903ms: waiting for machine to come up
	I0717 19:32:55.162861  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.163361  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.163394  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:55.163319  460568 retry.go:31] will retry after 446.719082ms: waiting for machine to come up
	I0717 19:32:55.611971  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.612359  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.612388  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:55.612297  460568 retry.go:31] will retry after 387.196239ms: waiting for machine to come up
	I0717 19:32:56.000969  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.001385  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.001421  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:56.001323  460568 retry.go:31] will retry after 618.776991ms: waiting for machine to come up
	I0717 19:32:54.815249  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:54.818280  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:54.818662  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:54.818694  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:54.818925  459147 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 19:32:54.823292  459147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:32:54.837168  459147 kubeadm.go:883] updating cluster {Name:no-preload-713715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:32:54.837345  459147 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:32:54.837394  459147 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:32:54.875819  459147 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 19:32:54.875859  459147 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:32:54.875946  459147 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:54.875964  459147 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:54.875987  459147 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:54.876016  459147 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:54.876030  459147 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 19:32:54.875991  459147 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:54.875971  459147 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:54.875949  459147 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:54.878011  459147 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:54.878029  459147 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:54.878033  459147 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:54.878047  459147 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:54.878078  459147 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:54.878020  459147 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:54.878020  459147 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:54.878021  459147 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 19:32:55.044905  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.065945  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.077752  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.100576  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 19:32:55.105038  459147 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 19:32:55.105122  459147 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.105181  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.109323  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.138522  459147 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 19:32:55.138582  459147 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.138652  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.166056  459147 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 19:32:55.166116  459147 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.166172  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.225986  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.255114  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.291108  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.291133  459147 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 19:32:55.291179  459147 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.291225  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.291238  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.291283  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.291287  459147 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 19:32:55.291355  459147 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.291382  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.317030  459147 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 19:32:55.317075  459147 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.317122  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.372223  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.372291  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.372329  459147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.378465  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.378498  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 19:32:55.378504  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 19:32:55.378584  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.378593  459147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:55.378589  459147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:32:55.443789  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 19:32:55.443799  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 19:32:55.443851  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.443902  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.443914  459147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:32:55.451377  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 19:32:55.451452  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 19:32:55.451487  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 19:32:55.451496  459147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:32:55.451535  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 19:32:55.451540  459147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:32:55.452022  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 19:32:55.848543  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:56.622250  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.622728  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.622756  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:56.622674  460568 retry.go:31] will retry after 591.25664ms: waiting for machine to come up
	I0717 19:32:57.215318  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:57.215728  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:57.215760  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:57.215674  460568 retry.go:31] will retry after 1.178875952s: waiting for machine to come up
	I0717 19:32:58.396341  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:58.396810  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:58.396840  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:58.396757  460568 retry.go:31] will retry after 1.444090511s: waiting for machine to come up
	I0717 19:32:59.842294  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:59.842722  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:59.842750  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:59.842683  460568 retry.go:31] will retry after 1.660894501s: waiting for machine to come up
	I0717 19:32:57.819031  459147 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.367504857s)
	I0717 19:32:57.819080  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 19:32:57.819112  459147 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.367550192s)
	I0717 19:32:57.819123  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 19:32:57.819196  459147 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.970607417s)
	I0717 19:32:57.819211  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.375270996s)
	I0717 19:32:57.819232  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 19:32:57.819254  459147 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 19:32:57.819260  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:57.819291  459147 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:57.819322  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:57.819335  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:57.823619  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:59.879412  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.060056699s)
	I0717 19:32:59.879448  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 19:32:59.879475  459147 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.055825616s)
	I0717 19:32:59.879539  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 19:32:59.879480  459147 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:32:59.879645  459147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:32:59.879762  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:33:01.862179  459147 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.982496804s)
	I0717 19:33:01.862232  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 19:33:01.862284  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.982489567s)
	I0717 19:33:01.862311  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 19:33:01.862352  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:33:01.862439  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:33:01.505553  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:01.505921  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:01.505949  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:01.505876  460568 retry.go:31] will retry after 1.937668711s: waiting for machine to come up
	I0717 19:33:03.445356  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:03.445903  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:03.445949  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:03.445839  460568 retry.go:31] will retry after 2.088910223s: waiting for machine to come up
	I0717 19:33:05.537212  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:05.537609  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:05.537640  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:05.537527  460568 retry.go:31] will retry after 2.960616491s: waiting for machine to come up
	I0717 19:33:03.827643  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.965173972s)
	I0717 19:33:03.827677  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 19:33:03.827712  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:33:03.827769  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:33:05.287464  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.459663322s)
	I0717 19:33:05.287509  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 19:33:05.287543  459147 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:33:05.287638  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:33:08.500028  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:08.500625  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:08.500667  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:08.500568  460568 retry.go:31] will retry after 3.494426589s: waiting for machine to come up
	I0717 19:33:08.560006  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.272339244s)
	I0717 19:33:08.560060  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 19:33:08.560099  459147 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:33:08.560169  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:33:09.202632  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 19:33:09.202684  459147 cache_images.go:123] Successfully loaded all cached images
	I0717 19:33:09.202692  459147 cache_images.go:92] duration metric: took 14.326812062s to LoadCachedImages
	I0717 19:33:09.202709  459147 kubeadm.go:934] updating node { 192.168.61.66 8443 v1.31.0-beta.0 crio true true} ...
	I0717 19:33:09.202917  459147 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-713715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:09.203024  459147 ssh_runner.go:195] Run: crio config
	I0717 19:33:09.250281  459147 cni.go:84] Creating CNI manager for ""
	I0717 19:33:09.250307  459147 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:09.250319  459147 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:09.250348  459147 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.66 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-713715 NodeName:no-preload-713715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.66"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.66 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:09.250507  459147 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.66
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-713715"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.66
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.66"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:09.250572  459147 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 19:33:09.260855  459147 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:09.260926  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:09.270148  459147 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0717 19:33:09.287113  459147 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 19:33:09.303147  459147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0717 19:33:09.319718  459147 ssh_runner.go:195] Run: grep 192.168.61.66	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:09.323343  459147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.66	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:09.335051  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:09.458012  459147 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:09.476517  459147 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715 for IP: 192.168.61.66
	I0717 19:33:09.476548  459147 certs.go:194] generating shared ca certs ...
	I0717 19:33:09.476581  459147 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:09.476822  459147 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:09.476888  459147 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:09.476901  459147 certs.go:256] generating profile certs ...
	I0717 19:33:09.477093  459147 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/client.key
	I0717 19:33:09.477157  459147 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.key.833d71c5
	I0717 19:33:09.477198  459147 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.key
	I0717 19:33:09.477346  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:09.477380  459147 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:09.477390  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:09.477415  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:09.477436  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:09.477460  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:09.477496  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:09.478210  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:09.523245  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:09.556326  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:09.592018  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:09.631190  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 19:33:09.663671  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:33:09.691062  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:09.715211  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:09.740818  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:09.766086  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:09.791739  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:09.817034  459147 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:09.835074  459147 ssh_runner.go:195] Run: openssl version
	I0717 19:33:09.841297  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:09.853525  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.857984  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.858052  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.864308  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:09.875577  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:09.886977  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.891840  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.891894  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.898044  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:09.910756  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:09.922945  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.927708  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.927771  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.933774  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:09.945891  459147 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:09.950743  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:09.956992  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:09.963228  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:09.969576  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:09.975912  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:09.982164  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 19:33:09.988308  459147 kubeadm.go:392] StartCluster: {Name:no-preload-713715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:09.988412  459147 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:09.988473  459147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:10.038048  459147 cri.go:89] found id: ""
	I0717 19:33:10.038123  459147 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:10.050153  459147 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:10.050179  459147 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:10.050244  459147 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:10.061413  459147 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:10.062384  459147 kubeconfig.go:125] found "no-preload-713715" server: "https://192.168.61.66:8443"
	I0717 19:33:10.064510  459147 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:10.075459  459147 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.66
	I0717 19:33:10.075494  459147 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:10.075507  459147 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:10.075551  459147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:10.115024  459147 cri.go:89] found id: ""
	I0717 19:33:10.115093  459147 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:10.135459  459147 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:10.147000  459147 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:10.147027  459147 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:10.147074  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:10.158197  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:10.158267  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:10.168726  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:10.178115  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:10.178169  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:10.187888  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:10.197501  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:10.197564  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:10.208958  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:10.219818  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:10.219889  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
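	The grep/rm sequence above is a cleanup loop: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise deleted so the later `kubeadm init phase kubeconfig all` call can recreate it. A rough Go sketch of that loop (the `run` helper is an assumed stand-in, not minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

// run is a stand-in helper: it executes the command locally and returns a
// non-nil error on any failure.
func run(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero when the endpoint is absent or the file is
		// missing; either way the stale file is removed so kubeadm can
		// regenerate it in the kubeconfig phase.
		if err := run("sudo", "grep", endpoint, path); err != nil {
			run("sudo", "rm", "-f", path)
			fmt.Println("removed stale", path)
		}
	}
}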
	I0717 19:33:10.230847  459147 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:10.242115  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:10.352629  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.306147  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.508125  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.570418  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.632907  459147 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:11.633012  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:12.133086  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:13.378581  459741 start.go:364] duration metric: took 4m1.766913597s to acquireMachinesLock for "old-k8s-version-998147"
	I0717 19:33:13.378661  459741 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:33:13.378670  459741 fix.go:54] fixHost starting: 
	I0717 19:33:13.379301  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:13.379346  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:13.399824  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0717 19:33:13.400269  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:13.400788  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:33:13.400811  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:13.401179  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:13.401339  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:13.401493  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetState
	I0717 19:33:13.403027  459741 fix.go:112] recreateIfNeeded on old-k8s-version-998147: state=Stopped err=<nil>
	I0717 19:33:13.403059  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	W0717 19:33:13.403205  459741 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:33:13.405244  459741 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-998147" ...
	I0717 19:33:11.996171  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.996646  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has current primary IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.996667  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Found IP for machine: 192.168.50.238
	I0717 19:33:11.996682  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Reserving static IP address...
	I0717 19:33:11.997157  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-378944", mac: "52:54:00:45:42:f3", ip: "192.168.50.238"} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:11.997197  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | skip adding static IP to network mk-default-k8s-diff-port-378944 - found existing host DHCP lease matching {name: "default-k8s-diff-port-378944", mac: "52:54:00:45:42:f3", ip: "192.168.50.238"}
	I0717 19:33:11.997213  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Reserved static IP address: 192.168.50.238
	I0717 19:33:11.997228  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for SSH to be available...
	I0717 19:33:11.997244  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Getting to WaitForSSH function...
	I0717 19:33:11.999193  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.999538  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:11.999564  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.999654  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Using SSH client type: external
	I0717 19:33:11.999689  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa (-rw-------)
	I0717 19:33:11.999718  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:11.999733  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | About to run SSH command:
	I0717 19:33:11.999751  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | exit 0
	I0717 19:33:12.124608  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | SSH cmd err, output: <nil>: 
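	The WaitForSSH step logged above simply retries a no-op remote command (`exit 0`) through the external ssh client until it succeeds. A simplified Go sketch under that assumption (host, user, and key path below are placeholders, not values from this run's configuration):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries `ssh ... exit 0` until sshd accepts the connection
// or the timeout expires.
func waitForSSH(host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host, "exit 0")
		if cmd.Run() == nil {
			return nil // sshd is up and accepting connections
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available after %s", host, timeout)
}

func main() {
	if err := waitForSSH("192.168.50.238", "/path/to/id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}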
	I0717 19:33:12.125041  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetConfigRaw
	I0717 19:33:12.125695  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:12.128263  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.128651  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.128683  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.128911  459447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/config.json ...
	I0717 19:33:12.129169  459447 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:12.129202  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:12.129412  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.131942  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.132259  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.132286  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.132464  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.132666  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.132847  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.133004  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.133213  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.133470  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.133484  459447 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:12.250371  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:12.250406  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.250672  459447 buildroot.go:166] provisioning hostname "default-k8s-diff-port-378944"
	I0717 19:33:12.250700  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.250891  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.253509  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.253895  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.253929  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.254116  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.254301  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.254467  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.254659  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.254809  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.255033  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.255048  459447 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-378944 && echo "default-k8s-diff-port-378944" | sudo tee /etc/hostname
	I0717 19:33:12.386839  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-378944
	
	I0717 19:33:12.386875  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.390265  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.390716  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.390758  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.390942  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.391165  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.391397  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.391593  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.391800  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.392028  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.392055  459447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-378944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-378944/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-378944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:12.510012  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:33:12.510080  459447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:12.510118  459447 buildroot.go:174] setting up certificates
	I0717 19:33:12.510139  459447 provision.go:84] configureAuth start
	I0717 19:33:12.510154  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.510469  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:12.513360  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.513713  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.513756  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.513840  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.516188  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.516606  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.516643  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.516778  459447 provision.go:143] copyHostCerts
	I0717 19:33:12.516867  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:12.516887  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:12.516946  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:12.517049  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:12.517060  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:12.517081  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:12.517133  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:12.517140  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:12.517157  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:12.517251  459447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-378944 san=[127.0.0.1 192.168.50.238 default-k8s-diff-port-378944 localhost minikube]
	I0717 19:33:12.664603  459447 provision.go:177] copyRemoteCerts
	I0717 19:33:12.664664  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:12.664692  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.667683  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.668071  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.668152  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.668276  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.668477  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.668665  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.668825  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:12.759500  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 19:33:12.789011  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:12.817876  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:12.847651  459447 provision.go:87] duration metric: took 337.491277ms to configureAuth
	I0717 19:33:12.847684  459447 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:12.847927  459447 config.go:182] Loaded profile config "default-k8s-diff-port-378944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:33:12.848029  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.851001  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.851460  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.851492  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.851670  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.851860  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.852050  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.852269  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.852466  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.852711  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.852736  459447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:13.135242  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:13.135272  459447 machine.go:97] duration metric: took 1.006081548s to provisionDockerMachine
	I0717 19:33:13.135286  459447 start.go:293] postStartSetup for "default-k8s-diff-port-378944" (driver="kvm2")
	I0717 19:33:13.135300  459447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:13.135331  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.135696  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:13.135731  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.138908  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.139252  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.139296  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.139577  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.139797  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.139996  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.140122  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.223998  459447 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:13.228297  459447 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:13.228327  459447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:13.228402  459447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:13.228508  459447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:13.228631  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:13.237923  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:13.262958  459447 start.go:296] duration metric: took 127.634911ms for postStartSetup
	I0717 19:33:13.263013  459447 fix.go:56] duration metric: took 19.949222697s for fixHost
	I0717 19:33:13.263040  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.265687  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.266102  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.266147  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.266274  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.266448  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.266658  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.266803  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.266974  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:13.267143  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:13.267154  459447 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:33:13.378375  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244793.352700977
	
	I0717 19:33:13.378410  459447 fix.go:216] guest clock: 1721244793.352700977
	I0717 19:33:13.378423  459447 fix.go:229] Guest: 2024-07-17 19:33:13.352700977 +0000 UTC Remote: 2024-07-17 19:33:13.263019102 +0000 UTC m=+276.814321502 (delta=89.681875ms)
	I0717 19:33:13.378449  459447 fix.go:200] guest clock delta is within tolerance: 89.681875ms
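	(The reported delta is simply guest time minus host time: 1721244793.352700977 − 1721244793.263019102 ≈ 0.0897 s, i.e. the 89.681875ms shown, which the fix.go check accepts as within tolerance.)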
	I0717 19:33:13.378455  459447 start.go:83] releasing machines lock for "default-k8s-diff-port-378944", held for 20.064692595s
	I0717 19:33:13.378490  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.378818  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:13.382250  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.382663  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.382697  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.382819  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383336  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383515  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383640  459447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:13.383699  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.383782  459447 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:13.383808  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.386565  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.386802  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.386971  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.387022  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.387206  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.387255  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.387280  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.387377  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.387517  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.387595  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.387695  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.387769  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.387822  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.387963  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.491993  459447 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:13.498224  459447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:13.651601  459447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:13.659061  459447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:13.659131  459447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:13.679137  459447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:13.679172  459447 start.go:495] detecting cgroup driver to use...
	I0717 19:33:13.679244  459447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:13.700173  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:13.713284  459447 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:13.713345  459447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:13.727665  459447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:13.741270  459447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:13.850771  459447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:14.014484  459447 docker.go:233] disabling docker service ...
	I0717 19:33:14.014573  459447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:14.034049  459447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:14.051903  459447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:14.176188  459447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:14.339288  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:14.354934  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:14.376713  459447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:33:14.376781  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.387318  459447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:14.387395  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.401869  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.414206  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.426803  459447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:14.437992  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.448554  459447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.467390  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.478878  459447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:14.488552  459447 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:14.488623  459447 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:14.501075  459447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:33:14.511085  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:14.673591  459447 ssh_runner.go:195] Run: sudo systemctl restart crio
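	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before crio is restarted (an illustrative reconstruction; the surrounding TOML section headers are omitted and the file may contain other entries):

pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]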
	I0717 19:33:14.812878  459447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:14.812974  459447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:14.818074  459447 start.go:563] Will wait 60s for crictl version
	I0717 19:33:14.818143  459447 ssh_runner.go:195] Run: which crictl
	I0717 19:33:14.822116  459447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:14.861763  459447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:14.861843  459447 ssh_runner.go:195] Run: crio --version
	I0717 19:33:14.891729  459447 ssh_runner.go:195] Run: crio --version
	I0717 19:33:14.925638  459447 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 19:33:14.927088  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:14.930542  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:14.931022  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:14.931068  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:14.931326  459447 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:14.936085  459447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:14.949590  459447 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-378944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:14.949747  459447 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:33:14.949875  459447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:14.991945  459447 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 19:33:14.992031  459447 ssh_runner.go:195] Run: which lz4
	I0717 19:33:14.996373  459447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:33:15.000840  459447 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:15.000875  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 19:33:13.406372  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .Start
	I0717 19:33:13.406519  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring networks are active...
	I0717 19:33:13.407255  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring network default is active
	I0717 19:33:13.407627  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring network mk-old-k8s-version-998147 is active
	I0717 19:33:13.408062  459741 main.go:141] libmachine: (old-k8s-version-998147) Getting domain xml...
	I0717 19:33:13.408909  459741 main.go:141] libmachine: (old-k8s-version-998147) Creating domain...
	I0717 19:33:14.690306  459741 main.go:141] libmachine: (old-k8s-version-998147) Waiting to get IP...
	I0717 19:33:14.691339  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:14.691802  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:14.691860  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:14.691788  460739 retry.go:31] will retry after 292.702678ms: waiting for machine to come up
	I0717 19:33:14.986450  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:14.986962  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:14.986987  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:14.986940  460739 retry.go:31] will retry after 251.722663ms: waiting for machine to come up
	I0717 19:33:15.240732  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:15.241343  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:15.241374  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:15.241290  460739 retry.go:31] will retry after 352.774498ms: waiting for machine to come up
	I0717 19:33:15.596176  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:15.596833  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:15.596859  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:15.596740  460739 retry.go:31] will retry after 570.542375ms: waiting for machine to come up
	I0717 19:33:16.168613  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:16.169103  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:16.169125  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:16.169061  460739 retry.go:31] will retry after 505.770507ms: waiting for machine to come up
	I0717 19:33:12.633596  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:12.674417  459147 api_server.go:72] duration metric: took 1.041511526s to wait for apiserver process to appear ...
	I0717 19:33:12.674443  459147 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:33:12.674473  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:12.674950  459147 api_server.go:269] stopped: https://192.168.61.66:8443/healthz: Get "https://192.168.61.66:8443/healthz": dial tcp 192.168.61.66:8443: connect: connection refused
	I0717 19:33:13.174575  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.167465  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.167503  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.167518  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.195663  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.195695  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.195712  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.203849  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.203880  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.674535  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.681650  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:16.681679  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:17.174938  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:17.186827  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:17.186890  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:17.674682  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:17.680814  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:17.680865  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:18.175463  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:18.181547  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:18.181576  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:18.675166  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:18.681507  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:18.681552  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:19.174630  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:19.183370  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:19.183416  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:19.674583  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:19.682432  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 200:
	ok
	I0717 19:33:19.691489  459147 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:33:19.691518  459147 api_server.go:131] duration metric: took 7.017066476s to wait for apiserver health ...
	I0717 19:33:19.691534  459147 cni.go:84] Creating CNI manager for ""
	I0717 19:33:19.691542  459147 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:19.693575  459147 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
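	The retries above are minikube's api_server.go polling GET /healthz on the apiserver until it answers 200. A minimal, hypothetical Go sketch of such a loop follows; the endpoint, timeout, and the InsecureSkipVerify shortcut are illustrative assumptions, not minikube's actual implementation.

// healthz_poll.go: poll an apiserver /healthz endpoint until it reports 200 OK.
// Hypothetical sketch only; minikube's real logic lives elsewhere (assumption).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// Skip certificate verification because the probe runs anonymously,
	// as the 403 "system:anonymous" responses in the log suggest.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: apiserver is ready
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen above
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.66:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}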
	I0717 19:33:16.494615  459447 crio.go:462] duration metric: took 1.498275118s to copy over tarball
	I0717 19:33:16.494697  459447 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:18.869018  459447 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.37428331s)
	I0717 19:33:18.869052  459447 crio.go:469] duration metric: took 2.374406548s to extract the tarball
	I0717 19:33:18.869063  459447 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:18.911073  459447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:18.952704  459447 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:33:18.952731  459447 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:33:18.952740  459447 kubeadm.go:934] updating node { 192.168.50.238 8444 v1.30.2 crio true true} ...
	I0717 19:33:18.952871  459447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-378944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:18.952961  459447 ssh_runner.go:195] Run: crio config
	I0717 19:33:19.004936  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:33:19.004962  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:19.004976  459447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:19.004997  459447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.238 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-378944 NodeName:default-k8s-diff-port-378944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:19.005127  459447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.238
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-378944"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:19.005190  459447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 19:33:19.018466  459447 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:19.018532  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:19.030706  459447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 19:33:19.050125  459447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:19.066411  459447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0717 19:33:19.083019  459447 ssh_runner.go:195] Run: grep 192.168.50.238	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:19.086956  459447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:19.098483  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:19.219538  459447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:19.240712  459447 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944 for IP: 192.168.50.238
	I0717 19:33:19.240760  459447 certs.go:194] generating shared ca certs ...
	I0717 19:33:19.240784  459447 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:19.240971  459447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:19.241029  459447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:19.241046  459447 certs.go:256] generating profile certs ...
	I0717 19:33:19.241147  459447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/client.key
	I0717 19:33:19.241232  459447 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.key.e4ed83d1
	I0717 19:33:19.241292  459447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.key
	I0717 19:33:19.241430  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:19.241472  459447 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:19.241488  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:19.241527  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:19.241563  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:19.241599  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:19.241670  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:19.242447  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:19.274950  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:19.305226  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:19.348027  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:19.384636  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 19:33:19.415615  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:33:19.443553  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:19.477731  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:19.509828  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:19.536409  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:19.562482  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:19.586980  459447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:19.603021  459447 ssh_runner.go:195] Run: openssl version
	I0717 19:33:19.608707  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:19.619272  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.624082  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.624144  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.630085  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:19.640930  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:19.651717  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.656207  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.656265  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.662211  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:19.672893  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:19.686880  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.691831  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.691883  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.699526  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:19.712458  459447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:19.717815  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:19.726172  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:19.732924  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:19.739322  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:19.749452  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:19.756136  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
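	The openssl -checkend 86400 calls above verify that each control-plane certificate remains valid for at least the next 24 hours. A hypothetical Go equivalent for a single PEM file is sketched below; the file path and threshold are illustrative assumptions.

// certcheck.go: report whether a PEM certificate expires within the next 24h,
// roughly what `openssl x509 -checkend 86400` does. Illustrative sketch only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}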
	I0717 19:33:19.763812  459447 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-378944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:19.763936  459447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:19.763998  459447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:19.807197  459447 cri.go:89] found id: ""
	I0717 19:33:19.807303  459447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:19.819547  459447 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:19.819577  459447 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:19.819652  459447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:19.832162  459447 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:19.833260  459447 kubeconfig.go:125] found "default-k8s-diff-port-378944" server: "https://192.168.50.238:8444"
	I0717 19:33:19.835685  459447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:19.849027  459447 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.238
	I0717 19:33:19.849077  459447 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:19.849094  459447 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:19.849182  459447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:19.893260  459447 cri.go:89] found id: ""
	I0717 19:33:19.893337  459447 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:19.910254  459447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:19.920017  459447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:19.920039  459447 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:19.920093  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 19:33:19.929144  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:19.929212  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:19.938461  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 19:33:19.947172  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:19.947242  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:19.956774  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 19:33:19.965778  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:19.965832  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:19.975529  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 19:33:19.984977  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:19.985037  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:19.994548  459447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:20.003758  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:20.326183  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.077120  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.274281  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.372150  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.472510  459447 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:21.472619  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:16.676221  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:16.676783  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:16.676810  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:16.676699  460739 retry.go:31] will retry after 789.027841ms: waiting for machine to come up
	I0717 19:33:17.467899  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:17.468360  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:17.468388  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:17.468307  460739 retry.go:31] will retry after 851.039047ms: waiting for machine to come up
	I0717 19:33:18.321307  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:18.321848  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:18.321877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:18.321790  460739 retry.go:31] will retry after 1.177722997s: waiting for machine to come up
	I0717 19:33:19.501191  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:19.501846  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:19.501877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:19.501754  460739 retry.go:31] will retry after 1.20353732s: waiting for machine to come up
	I0717 19:33:20.707223  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:20.707681  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:20.707715  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:20.707620  460739 retry.go:31] will retry after 2.05955161s: waiting for machine to come up
	I0717 19:33:19.694884  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:33:19.710519  459147 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:33:19.732437  459147 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:33:19.743619  459147 system_pods.go:59] 8 kube-system pods found
	I0717 19:33:19.743647  459147 system_pods.go:61] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:33:19.743657  459147 system_pods.go:61] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:33:19.743667  459147 system_pods.go:61] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:33:19.743681  459147 system_pods.go:61] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:33:19.743688  459147 system_pods.go:61] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:33:19.743698  459147 system_pods.go:61] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:33:19.743711  459147 system_pods.go:61] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:33:19.743718  459147 system_pods.go:61] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:33:19.743725  459147 system_pods.go:74] duration metric: took 11.261865ms to wait for pod list to return data ...
	I0717 19:33:19.743742  459147 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:33:19.749108  459147 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:33:19.749135  459147 node_conditions.go:123] node cpu capacity is 2
	I0717 19:33:19.749163  459147 node_conditions.go:105] duration metric: took 5.414531ms to run NodePressure ...
	I0717 19:33:19.749183  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:22.151017  459147 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.401804862s)
	I0717 19:33:22.151065  459147 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:33:22.158240  459147 kubeadm.go:739] kubelet initialised
	I0717 19:33:22.158277  459147 kubeadm.go:740] duration metric: took 7.198956ms waiting for restarted kubelet to initialise ...
	I0717 19:33:22.158298  459147 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:22.164783  459147 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.174103  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.174465  459147 pod_ready.go:81] duration metric: took 9.568158ms for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.174513  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.174544  459147 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.184692  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "etcd-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.184804  459147 pod_ready.go:81] duration metric: took 10.23708ms for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.184862  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "etcd-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.184891  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.193029  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-apiserver-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.193143  459147 pod_ready.go:81] duration metric: took 8.227095ms for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.193175  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-apiserver-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.193234  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.200916  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.201017  459147 pod_ready.go:81] duration metric: took 7.740745ms for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.201047  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.201081  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.555554  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-proxy-x85f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.555590  459147 pod_ready.go:81] duration metric: took 354.475367ms for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.555603  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-proxy-x85f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.555612  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.977850  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-scheduler-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.977889  459147 pod_ready.go:81] duration metric: took 422.268041ms for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.977904  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-scheduler-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.977913  459147 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:23.355727  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:23.355765  459147 pod_ready.go:81] duration metric: took 377.839773ms for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:23.355778  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:23.355787  459147 pod_ready.go:38] duration metric: took 1.197476636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
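	The pod_ready waits above fetch each system-critical pod and inspect its Ready condition, skipping pods whose node is not yet Ready. A minimal client-go sketch of that check for one pod follows; the kubeconfig path, namespace, and pod name are illustrative assumptions, and this is not minikube's actual pod_ready.go.

// podready.go: check whether a pod reports the Ready condition, similar to the
// pod_ready waits above. Hypothetical sketch using client-go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path; the log above writes its kubeconfig under the Jenkins integration tree.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19282-392903/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-713715", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s ready: %v\n", pod.Name, podIsReady(pod))
}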
	I0717 19:33:23.355807  459147 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:33:23.369763  459147 ops.go:34] apiserver oom_adj: -16
	I0717 19:33:23.369789  459147 kubeadm.go:597] duration metric: took 13.319602224s to restartPrimaryControlPlane
	I0717 19:33:23.369801  459147 kubeadm.go:394] duration metric: took 13.381501456s to StartCluster
	I0717 19:33:23.369825  459147 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:23.369925  459147 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:33:23.371364  459147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:23.371643  459147 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:33:23.371763  459147 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:33:23.371851  459147 addons.go:69] Setting storage-provisioner=true in profile "no-preload-713715"
	I0717 19:33:23.371902  459147 addons.go:234] Setting addon storage-provisioner=true in "no-preload-713715"
	I0717 19:33:23.371905  459147 config.go:182] Loaded profile config "no-preload-713715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0717 19:33:23.371915  459147 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:33:23.371904  459147 addons.go:69] Setting default-storageclass=true in profile "no-preload-713715"
	I0717 19:33:23.371921  459147 addons.go:69] Setting metrics-server=true in profile "no-preload-713715"
	I0717 19:33:23.371949  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.371963  459147 addons.go:234] Setting addon metrics-server=true in "no-preload-713715"
	I0717 19:33:23.371962  459147 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-713715"
	W0717 19:33:23.371973  459147 addons.go:243] addon metrics-server should already be in state true
	I0717 19:33:23.372010  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.372248  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372283  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.372354  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372363  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372380  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.372466  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.373392  459147 out.go:177] * Verifying Kubernetes components...
	I0717 19:33:23.374639  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:23.391842  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45469
	I0717 19:33:23.391844  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I0717 19:33:23.392376  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.392449  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.392909  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.392934  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.393266  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.393283  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.393316  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.393673  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.394050  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.394066  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.394279  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.394317  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.413449  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0717 19:33:23.413977  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.414416  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.414429  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.414535  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35317
	I0717 19:33:23.414847  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.415050  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.415439  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33637
	I0717 19:33:23.415603  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.416098  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.416416  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.416442  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.416782  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.416860  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.417110  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.417129  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.417169  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.417631  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.417898  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.419162  459147 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:33:23.419540  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.420437  459147 addons.go:234] Setting addon default-storageclass=true in "no-preload-713715"
	W0717 19:33:23.420461  459147 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:33:23.420531  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.420670  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:33:23.420690  459147 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:33:23.420710  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.420935  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.420987  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.421482  459147 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:23.422876  459147 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:33:23.422895  459147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:33:23.422914  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.424665  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.425387  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.425596  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.425648  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.425860  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.426032  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.426224  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.426508  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.426884  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.426912  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.427019  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.427204  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.427375  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.427536  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.440935  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0717 19:33:23.441405  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.442015  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.442036  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.442449  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.443045  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.443086  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.462722  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0717 19:33:23.463099  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.463642  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.463666  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.464015  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.464302  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.465945  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.466153  459147 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:33:23.466168  459147 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:33:23.466187  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.469235  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.469665  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.469690  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.469961  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.470125  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.470263  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.470380  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.604321  459147 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:23.631723  459147 node_ready.go:35] waiting up to 6m0s for node "no-preload-713715" to be "Ready" ...
	I0717 19:33:23.691508  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:33:23.691839  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:33:23.870407  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:33:23.870440  459147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:33:23.962828  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:33:23.962862  459147 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:33:24.048413  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:33:24.048458  459147 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:33:24.180828  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:33:25.337869  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.645994421s)
	I0717 19:33:25.337928  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.337939  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.338245  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.338260  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.338267  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.338279  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.340140  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.340158  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.340163  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.341608  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.650024823s)
	I0717 19:33:25.341659  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.341673  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.341991  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.342008  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.342052  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.342072  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.342087  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.343152  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.343174  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.374730  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.374764  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.375093  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.375192  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.375214  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.648979  459147 node_ready.go:53] node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:25.756694  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.575723552s)
	I0717 19:33:25.756793  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.756809  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.757133  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.757197  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.757210  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.757222  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.757231  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.757463  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.757496  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.757508  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.757518  459147 addons.go:475] Verifying addon metrics-server=true in "no-preload-713715"
	I0717 19:33:25.760056  459147 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
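Each addon manifest above is copied onto the node and then applied over SSH with the node-local kubectl and kubeconfig. A rough sketch of that remote-apply step using golang.org/x/crypto/ssh, with illustrative host, key, and binary paths rather than the exact values from this run:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Illustrative values; minikube uses the VM's DHCP-assigned IP and the
	// per-machine id_rsa it generated at create time.
	key, err := os.ReadFile("/home/user/.minikube/machines/example/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM host key
	}
	client, err := ssh.Dial("tcp", "192.168.61.66:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Apply one addon manifest with the kubelet-local kubeconfig, as the log does
	// (the real command points at the versioned kubectl under /var/lib/minikube/binaries).
	out, err := session.CombinedOutput(
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}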
	I0717 19:33:21.973023  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:22.473773  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:22.494696  459447 api_server.go:72] duration metric: took 1.022184833s to wait for apiserver process to appear ...
	I0717 19:33:22.494730  459447 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:33:22.494756  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:22.495278  459447 api_server.go:269] stopped: https://192.168.50.238:8444/healthz: Get "https://192.168.50.238:8444/healthz": dial tcp 192.168.50.238:8444: connect: connection refused
	I0717 19:33:22.994814  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.523793  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:25.523836  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:25.523861  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.572664  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:25.572703  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:25.994910  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.999901  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:25.999941  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:22.769700  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:22.770437  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:22.770462  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:22.770379  460739 retry.go:31] will retry after 2.380645077s: waiting for machine to come up
	I0717 19:33:25.152531  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:25.153124  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:25.153154  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:25.152995  460739 retry.go:31] will retry after 2.594173577s: waiting for machine to come up
	I0717 19:33:25.761158  459147 addons.go:510] duration metric: took 2.389396179s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:33:26.636593  459147 node_ready.go:49] node "no-preload-713715" has status "Ready":"True"
	I0717 19:33:26.636631  459147 node_ready.go:38] duration metric: took 3.004871258s for node "no-preload-713715" to be "Ready" ...
	I0717 19:33:26.636647  459147 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:26.645025  459147 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.657588  459147 pod_ready.go:92] pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:26.657621  459147 pod_ready.go:81] duration metric: took 12.564266ms for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.657643  459147 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.495865  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:26.501901  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:26.501948  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:26.995379  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:27.007246  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:27.007293  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:27.495657  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:27.500340  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:27.500376  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:27.995477  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.001272  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:28.001311  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:28.495106  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.499745  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:28.499785  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:28.994956  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.999368  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 200:
	ok
	I0717 19:33:29.005912  459447 api_server.go:141] control plane version: v1.30.2
	I0717 19:33:29.005941  459447 api_server.go:131] duration metric: took 6.511204058s to wait for apiserver health ...
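The healthz wait above keeps issuing anonymous GETs against the apiserver's /healthz endpoint until it answers 200; the early 403s and 500s are expected while RBAC bootstrap roles and the remaining post-start hooks finish. A bare-bones sketch of that probe (certificate verification is skipped only because the probe is unauthenticated and the status code is all that matters):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint from this run; substitute your own apiserver address.
	url := "https://192.168.50.238:8444/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // anonymous probe, self-signed cert
		},
	}
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("apiserver not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // 200 "ok" ends the wait
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}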
	I0717 19:33:29.005952  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:33:29.005958  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:29.007962  459447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:33:29.009467  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:33:29.020044  459447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
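The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is a bridge CNI configuration. Its exact contents are not shown in the log; the sketch below writes a representative bridge/host-local conflist, and every field value in it is an assumption for illustration, not the file minikube generated:

package main

import "os"

func main() {
	// Representative bridge CNI conflist (bridge plugin + host-local IPAM).
	// Subnet, bridge name and flags are illustrative assumptions.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`
	// Written locally here; on the node the file lands in /etc/cni/net.d/ (root required).
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}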
	I0717 19:33:29.039591  459447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:33:29.049534  459447 system_pods.go:59] 8 kube-system pods found
	I0717 19:33:29.049575  459447 system_pods.go:61] "coredns-7db6d8ff4d-zrllj" [a343d67b-7bfe-4433-a6a0-dd129f622484] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:33:29.049585  459447 system_pods.go:61] "etcd-default-k8s-diff-port-378944" [8b73f940-3131-4c49-88a8-909e448a17fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:33:29.049592  459447 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-378944" [4368acf5-fcf0-4bb1-8518-dc883a3ad94a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:33:29.049600  459447 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-378944" [a9dce074-19b1-4375-bb51-2fa3a7e628a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:33:29.049605  459447 system_pods.go:61] "kube-proxy-qq6gq" [7cd51f2c-1d5d-4376-8685-a4912f158995] Running
	I0717 19:33:29.049609  459447 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-378944" [2889aa80-5d65-485f-b4ef-396e76a40a80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:33:29.049617  459447 system_pods.go:61] "metrics-server-569cc877fc-7rl9d" [217e917f-6179-4b21-baed-7293ef9f6fc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:33:29.049621  459447 system_pods.go:61] "storage-provisioner" [fc434634-e675-4df7-8df2-330e3f2cf36b] Running
	I0717 19:33:29.049628  459447 system_pods.go:74] duration metric: took 10.013687ms to wait for pod list to return data ...
	I0717 19:33:29.049640  459447 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:33:29.053279  459447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:33:29.053306  459447 node_conditions.go:123] node cpu capacity is 2
	I0717 19:33:29.053318  459447 node_conditions.go:105] duration metric: took 3.672966ms to run NodePressure ...
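The NodePressure step reads each node's reported capacity (CPU, ephemeral storage) and pressure conditions from the API before continuing. A short client-go sketch of pulling the same information (same placeholder kubeconfig path as in the earlier sketch):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		// A healthy node reports MemoryPressure, DiskPressure and PIDPressure as False.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}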
	I0717 19:33:29.053336  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:29.329460  459447 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:33:29.335545  459447 kubeadm.go:739] kubelet initialised
	I0717 19:33:29.335570  459447 kubeadm.go:740] duration metric: took 6.082515ms waiting for restarted kubelet to initialise ...
	I0717 19:33:29.335587  459447 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:29.343632  459447 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.348772  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.348798  459447 pod_ready.go:81] duration metric: took 5.144899ms for pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.348810  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.348820  459447 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.354355  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.354386  459447 pod_ready.go:81] duration metric: took 5.550767ms for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.354398  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.354410  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.359416  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.359433  459447 pod_ready.go:81] duration metric: took 5.007721ms for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.359442  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.359448  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:31.369477  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:27.748311  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:27.748683  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:27.748710  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:27.748647  460739 retry.go:31] will retry after 3.034683519s: waiting for machine to come up
	I0717 19:33:30.784524  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.784995  459741 main.go:141] libmachine: (old-k8s-version-998147) Found IP for machine: 192.168.72.208
	I0717 19:33:30.785018  459741 main.go:141] libmachine: (old-k8s-version-998147) Reserving static IP address...
	I0717 19:33:30.785042  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has current primary IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.785437  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "old-k8s-version-998147", mac: "52:54:00:e7:d4:91", ip: "192.168.72.208"} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.785462  459741 main.go:141] libmachine: (old-k8s-version-998147) Reserved static IP address: 192.168.72.208
	I0717 19:33:30.785478  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | skip adding static IP to network mk-old-k8s-version-998147 - found existing host DHCP lease matching {name: "old-k8s-version-998147", mac: "52:54:00:e7:d4:91", ip: "192.168.72.208"}
	I0717 19:33:30.785490  459741 main.go:141] libmachine: (old-k8s-version-998147) Waiting for SSH to be available...
	I0717 19:33:30.785502  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Getting to WaitForSSH function...
	I0717 19:33:30.787861  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.788286  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.788339  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.788506  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using SSH client type: external
	I0717 19:33:30.788535  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa (-rw-------)
	I0717 19:33:30.788575  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:30.788592  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | About to run SSH command:
	I0717 19:33:30.788605  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | exit 0
	I0717 19:33:30.916827  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | SSH cmd err, output: <nil>: 
	I0717 19:33:30.917232  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetConfigRaw
	I0717 19:33:30.917949  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:30.920672  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.921033  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.921069  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.921321  459741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json ...
	I0717 19:33:30.921518  459741 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:30.921538  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:30.921777  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:30.923995  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.924337  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.924364  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.924515  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:30.924708  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:30.924894  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:30.925021  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:30.925229  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:30.925417  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:30.925428  459741 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:31.037218  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:31.037249  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.037537  459741 buildroot.go:166] provisioning hostname "old-k8s-version-998147"
	I0717 19:33:31.037569  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.037782  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.040877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.041209  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.041252  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.041382  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.041577  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.041764  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.041940  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.042121  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.042313  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.042329  459741 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-998147 && echo "old-k8s-version-998147" | sudo tee /etc/hostname
	I0717 19:33:31.169368  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-998147
	
	I0717 19:33:31.169401  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.172170  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.172475  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.172520  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.172739  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.172950  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.173133  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.173321  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.173557  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.173809  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.173828  459741 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-998147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-998147/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-998147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:31.293920  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:33:31.293957  459741 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:31.293997  459741 buildroot.go:174] setting up certificates
	I0717 19:33:31.294010  459741 provision.go:84] configureAuth start
	I0717 19:33:31.294022  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.294383  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:31.297356  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.297766  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.297800  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.297961  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.300159  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.300454  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.300507  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.300638  459741 provision.go:143] copyHostCerts
	I0717 19:33:31.300707  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:31.300721  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:31.300787  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:31.300917  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:31.300929  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:31.300962  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:31.301038  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:31.301046  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:31.301066  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:31.301112  459741 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-998147 san=[127.0.0.1 192.168.72.208 localhost minikube old-k8s-version-998147]
	I0717 19:33:32.217560  459061 start.go:364] duration metric: took 53.370503448s to acquireMachinesLock for "embed-certs-637675"
	I0717 19:33:32.217640  459061 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:33:32.217653  459061 fix.go:54] fixHost starting: 
	I0717 19:33:32.218221  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:32.218273  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:32.236152  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0717 19:33:32.236693  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:32.237234  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:33:32.237261  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:32.237630  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:32.237827  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:32.237981  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:33:32.239582  459061 fix.go:112] recreateIfNeeded on embed-certs-637675: state=Stopped err=<nil>
	I0717 19:33:32.239630  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	W0717 19:33:32.239777  459061 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:33:32.241662  459061 out.go:177] * Restarting existing kvm2 VM for "embed-certs-637675" ...
	I0717 19:33:28.164383  459147 pod_ready.go:92] pod "etcd-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.164416  459147 pod_ready.go:81] duration metric: took 1.506759615s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.164430  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.169329  459147 pod_ready.go:92] pod "kube-apiserver-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.169359  459147 pod_ready.go:81] duration metric: took 4.920897ms for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.169374  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.174231  459147 pod_ready.go:92] pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.174256  459147 pod_ready.go:81] duration metric: took 4.874197ms for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.174270  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:30.181752  459147 pod_ready.go:102] pod "kube-proxy-x85f5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:32.181095  459147 pod_ready.go:92] pod "kube-proxy-x85f5" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:32.181128  459147 pod_ready.go:81] duration metric: took 4.006849577s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.181146  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.186196  459147 pod_ready.go:92] pod "kube-scheduler-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:32.186226  459147 pod_ready.go:81] duration metric: took 5.071066ms for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.186240  459147 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:31.522479  459741 provision.go:177] copyRemoteCerts
	I0717 19:33:31.522546  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:31.522602  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.525768  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.526171  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.526203  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.526344  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.526551  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.526724  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.526904  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:31.612117  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 19:33:31.638832  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:31.664757  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:31.689941  459741 provision.go:87] duration metric: took 395.916596ms to configureAuth
	I0717 19:33:31.689975  459741 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:31.690190  459741 config.go:182] Loaded profile config "old-k8s-version-998147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:33:31.690265  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.692837  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.693207  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.693234  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.693449  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.693671  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.693826  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.694059  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.694245  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.694413  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.694429  459741 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:31.974825  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:31.974852  459741 machine.go:97] duration metric: took 1.053320969s to provisionDockerMachine
	I0717 19:33:31.974865  459741 start.go:293] postStartSetup for "old-k8s-version-998147" (driver="kvm2")
	I0717 19:33:31.974875  459741 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:31.974896  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:31.975219  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:31.975248  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.978388  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.978767  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.978799  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.979026  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.979228  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.979423  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.979548  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.063516  459741 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:32.067826  459741 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:32.067854  459741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:32.067935  459741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:32.068032  459741 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:32.068178  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:32.077672  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:32.102750  459741 start.go:296] duration metric: took 127.86801ms for postStartSetup
	I0717 19:33:32.102793  459741 fix.go:56] duration metric: took 18.724124854s for fixHost
	I0717 19:33:32.102816  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.105928  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.106324  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.106349  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.106498  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.106750  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.106912  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.107091  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.107267  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:32.107435  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:32.107447  459741 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:33:32.217378  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244812.173823160
	
	I0717 19:33:32.217412  459741 fix.go:216] guest clock: 1721244812.173823160
	I0717 19:33:32.217424  459741 fix.go:229] Guest: 2024-07-17 19:33:32.17382316 +0000 UTC Remote: 2024-07-17 19:33:32.102798084 +0000 UTC m=+260.639424711 (delta=71.025076ms)
	I0717 19:33:32.217462  459741 fix.go:200] guest clock delta is within tolerance: 71.025076ms
	I0717 19:33:32.217476  459741 start.go:83] releasing machines lock for "old-k8s-version-998147", held for 18.838841423s
	I0717 19:33:32.217515  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.217908  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:32.221349  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.221669  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.221701  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.221823  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222444  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222647  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222744  459741 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:32.222799  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.222935  459741 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:32.222963  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.225811  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.225842  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226180  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.226207  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226235  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.226252  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226347  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.226651  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.226654  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.226818  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.226911  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.226963  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.227238  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.227243  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.331645  459741 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:32.338968  459741 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:32.491164  459741 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:32.498407  459741 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:32.498472  459741 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:32.515829  459741 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:32.515858  459741 start.go:495] detecting cgroup driver to use...
	I0717 19:33:32.515926  459741 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:32.534094  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:32.549874  459741 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:32.549938  459741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:32.565389  459741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:32.580187  459741 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:32.709855  459741 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:32.889734  459741 docker.go:233] disabling docker service ...
	I0717 19:33:32.889804  459741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:32.909179  459741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:32.923944  459741 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:33.043740  459741 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:33.174272  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:33.189545  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:33.210166  459741 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 19:33:33.210238  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.222478  459741 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:33.222547  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.234479  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.247161  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.258702  459741 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:33.271516  459741 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:33.282032  459741 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:33.282087  459741 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:33.296554  459741 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:33:33.307378  459741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:33.447447  459741 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:33:33.606295  459741 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:33.606388  459741 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:33.611193  459741 start.go:563] Will wait 60s for crictl version
	I0717 19:33:33.611252  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:33.615370  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:33.660721  459741 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:33.660803  459741 ssh_runner.go:195] Run: crio --version
	I0717 19:33:33.695406  459741 ssh_runner.go:195] Run: crio --version
	I0717 19:33:33.727703  459741 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 19:33:32.243015  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Start
	I0717 19:33:32.243191  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring networks are active...
	I0717 19:33:32.244008  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring network default is active
	I0717 19:33:32.244302  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring network mk-embed-certs-637675 is active
	I0717 19:33:32.244826  459061 main.go:141] libmachine: (embed-certs-637675) Getting domain xml...
	I0717 19:33:32.245560  459061 main.go:141] libmachine: (embed-certs-637675) Creating domain...
	I0717 19:33:33.537081  459061 main.go:141] libmachine: (embed-certs-637675) Waiting to get IP...
	I0717 19:33:33.538117  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:33.538562  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:33.538630  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:33.538531  460929 retry.go:31] will retry after 245.180235ms: waiting for machine to come up
	I0717 19:33:33.784957  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:33.785535  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:33.785567  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:33.785490  460929 retry.go:31] will retry after 353.289988ms: waiting for machine to come up
	I0717 19:33:34.141088  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.141697  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.141721  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.141637  460929 retry.go:31] will retry after 404.344963ms: waiting for machine to come up
	I0717 19:33:34.547331  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.547928  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.547956  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.547822  460929 retry.go:31] will retry after 382.194721ms: waiting for machine to come up
	I0717 19:33:34.931269  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.931746  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.931776  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.931653  460929 retry.go:31] will retry after 485.884671ms: waiting for machine to come up
	I0717 19:33:35.419418  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:35.419957  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:35.419991  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:35.419896  460929 retry.go:31] will retry after 598.409396ms: waiting for machine to come up
	I0717 19:33:36.019507  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:36.020091  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:36.020118  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:36.020041  460929 retry.go:31] will retry after 815.010839ms: waiting for machine to come up
	I0717 19:33:33.866250  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:35.869264  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:33.729003  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:33.732254  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:33.732730  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:33.732761  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:33.732992  459741 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:33.737578  459741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:33.751952  459741 kubeadm.go:883] updating cluster {Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:33.752069  459741 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:33:33.752141  459741 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:33.799085  459741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:33:33.799167  459741 ssh_runner.go:195] Run: which lz4
	I0717 19:33:33.803899  459741 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:33:33.808398  459741 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:33.808431  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 19:33:35.539736  459741 crio.go:462] duration metric: took 1.735871318s to copy over tarball
	I0717 19:33:35.539833  459741 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:34.210207  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:36.693543  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:36.837115  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:36.837531  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:36.837560  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:36.837482  460929 retry.go:31] will retry after 1.072167201s: waiting for machine to come up
	I0717 19:33:37.911591  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:37.912149  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:37.912173  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:37.912104  460929 retry.go:31] will retry after 1.782290473s: waiting for machine to come up
	I0717 19:33:39.696512  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:39.696980  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:39.697015  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:39.696923  460929 retry.go:31] will retry after 1.896567581s: waiting for machine to come up
	I0717 19:33:36.872836  459447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.872865  459447 pod_ready.go:81] duration metric: took 7.513409896s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.872876  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qq6gq" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.878642  459447 pod_ready.go:92] pod "kube-proxy-qq6gq" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.878665  459447 pod_ready.go:81] duration metric: took 5.782297ms for pod "kube-proxy-qq6gq" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.878673  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.887916  459447 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.887943  459447 pod_ready.go:81] duration metric: took 9.259629ms for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.887957  459447 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:39.411899  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:38.677338  459741 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.137463162s)
	I0717 19:33:38.677381  459741 crio.go:469] duration metric: took 3.137607875s to extract the tarball
	I0717 19:33:38.677396  459741 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:38.721981  459741 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:38.756640  459741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:33:38.756670  459741 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:33:38.756755  459741 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:38.756840  459741 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:38.756885  459741 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:38.756923  459741 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.756887  459741 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 19:33:38.756866  459741 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:38.756875  459741 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:38.757061  459741 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:38.758622  459741 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:38.758705  459741 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 19:33:38.758860  459741 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:38.758902  459741 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.758945  459741 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:38.758977  459741 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:38.759058  459741 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:38.759126  459741 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:38.947033  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 19:33:38.978340  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.989519  459741 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 19:33:38.989583  459741 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 19:33:38.989631  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.007170  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.034177  459741 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 19:33:39.034232  459741 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:39.034282  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.034287  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 19:33:39.062389  459741 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 19:33:39.062443  459741 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.062490  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.080521  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:39.080640  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 19:33:39.080739  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.101886  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.114010  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.122572  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.131514  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 19:33:39.145327  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 19:33:39.187564  459741 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 19:33:39.187685  459741 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.187756  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.192838  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.232745  459741 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 19:33:39.232807  459741 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.232822  459741 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 19:33:39.232864  459741 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.232897  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.232918  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.232867  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.249586  459741 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 19:33:39.249634  459741 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.249677  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.280522  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 19:33:39.280616  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.280622  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.280736  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.354545  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 19:33:39.354577  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 19:33:39.354740  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 19:33:39.640493  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:39.792919  459741 cache_images.go:92] duration metric: took 1.03622454s to LoadCachedImages
	W0717 19:33:39.793071  459741 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0717 19:33:39.793093  459741 kubeadm.go:934] updating node { 192.168.72.208 8443 v1.20.0 crio true true} ...
	I0717 19:33:39.793266  459741 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-998147 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:39.793390  459741 ssh_runner.go:195] Run: crio config
	I0717 19:33:39.854291  459741 cni.go:84] Creating CNI manager for ""
	I0717 19:33:39.854320  459741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:39.854333  459741 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:39.854355  459741 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-998147 NodeName:old-k8s-version-998147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 19:33:39.854569  459741 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-998147"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:39.854672  459741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 19:33:39.865802  459741 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:39.865892  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:39.878728  459741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 19:33:39.899402  459741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:39.917946  459741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 19:33:39.937916  459741 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:39.942211  459741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
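	(The /etc/hosts rewrite just above is idempotent: it strips any existing line ending in a tab plus "control-plane.minikube.internal" before appending the current mapping. Below is a minimal standalone Go sketch of the same update, not minikube's own code; it assumes root privileges and reuses the IP from this log purely as an illustration.)

	// hostsentry.go - standalone sketch of the idempotent /etc/hosts update shown above:
	// drop any line already ending in "<tab>control-plane.minikube.internal", then append
	// the current mapping. Requires root to write /etc/hosts; the IP value is illustrative.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostsEntry(ip, host string) error {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		var kept []string
		for _, line := range lines {
			// Drop stale mappings for the same hostname (mirrors grep -v $'\t<host>$').
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("192.168.72.208", "control-plane.minikube.internal"); err != nil {
			fmt.Println("hosts update failed:", err)
		}
	}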
	I0717 19:33:39.957083  459741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:40.077407  459741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:40.096211  459741 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147 for IP: 192.168.72.208
	I0717 19:33:40.096244  459741 certs.go:194] generating shared ca certs ...
	I0717 19:33:40.096269  459741 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:40.096511  459741 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:40.096578  459741 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:40.096592  459741 certs.go:256] generating profile certs ...
	I0717 19:33:40.096727  459741 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/client.key
	I0717 19:33:40.096794  459741 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key.204e9011
	I0717 19:33:40.096852  459741 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key
	I0717 19:33:40.097009  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:40.097049  459741 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:40.097062  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:40.097095  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:40.097133  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:40.097161  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:40.097215  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:40.097920  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:40.144174  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:40.182700  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:40.222340  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:40.259248  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 19:33:40.302619  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:33:40.335170  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:40.373447  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:33:40.409075  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:40.435692  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:40.460419  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:40.492357  459741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:40.515212  459741 ssh_runner.go:195] Run: openssl version
	I0717 19:33:40.523462  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:40.537951  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.544201  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.544264  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.552233  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:40.567486  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:40.583035  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.589287  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.589367  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.595802  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:40.613013  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:40.625080  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.630225  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.630298  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.636697  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:40.647728  459741 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:40.653165  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:40.659380  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:40.666126  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:40.673361  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:40.680123  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:40.686669  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
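	(The six openssl runs above use "-checkend 86400" to confirm each control-plane certificate remains valid for at least another 24 hours; exit status 0 means it does, 1 means it expires inside the window. A minimal Go sketch of that probe follows; it assumes the openssl CLI is on PATH and uses an illustrative certificate path rather than anything read from the log.)

	// certcheck.go - minimal sketch of the expiry probe above: openssl's -checkend 86400
	// exits 0 when the certificate is still valid 24h from now and 1 when it is not.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func expiresWithin24h(certPath string) (bool, error) {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
		err := cmd.Run()
		if err == nil {
			return false, nil // still valid for at least another 24h
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
			return true, nil // expires (or already expired) within the window
		}
		return false, err // openssl missing, unreadable file, etc.
	}

	func main() {
		expiring, err := expiresWithin24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", expiring)
	}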
	I0717 19:33:40.693569  459741 kubeadm.go:392] StartCluster: {Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:40.693682  459741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:40.693767  459741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:40.737536  459741 cri.go:89] found id: ""
	I0717 19:33:40.737637  459741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:40.749268  459741 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:40.749292  459741 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:40.749347  459741 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:40.760298  459741 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:40.761436  459741 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-998147" does not appear in /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:33:40.762162  459741 kubeconfig.go:62] /home/jenkins/minikube-integration/19282-392903/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-998147" cluster setting kubeconfig missing "old-k8s-version-998147" context setting]
	I0717 19:33:40.763136  459741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:40.860353  459741 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:40.871291  459741 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.208
	I0717 19:33:40.871329  459741 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:40.871348  459741 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:40.871404  459741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:40.909329  459741 cri.go:89] found id: ""
	I0717 19:33:40.909419  459741 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:40.926501  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:40.937534  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:40.937565  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:40.937640  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:40.946613  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:40.946692  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:40.956996  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:40.965988  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:40.966046  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:40.975285  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:40.984577  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:40.984642  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:40.994458  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:41.007766  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:41.007821  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:41.020451  459741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:41.034173  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:41.176766  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:38.694137  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:40.694562  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:41.594983  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:41.595523  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:41.595554  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:41.595469  460929 retry.go:31] will retry after 2.022688841s: waiting for machine to come up
	I0717 19:33:43.619805  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:43.620241  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:43.620277  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:43.620212  460929 retry.go:31] will retry after 3.581051367s: waiting for machine to come up
	I0717 19:33:41.896941  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:44.394301  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:42.579917  459741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.403105878s)
	I0717 19:33:42.579958  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:42.840718  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:42.961394  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:43.055710  459741 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:43.055799  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:43.556468  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:44.055954  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:44.555966  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:45.056266  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:45.556627  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:46.056807  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:42.695989  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:45.194178  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:47.195661  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:47.205836  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:47.206321  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:47.206343  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:47.206278  460929 retry.go:31] will retry after 4.261122451s: waiting for machine to come up
	I0717 19:33:46.894466  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:49.395152  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:46.555904  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:47.056616  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:47.556787  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:48.056072  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:48.555979  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:49.056074  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:49.556619  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:50.056758  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:50.555862  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:51.055991  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
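	(The run of pgrep commands above is a fixed-interval wait for the kube-apiserver process to appear after the kubeadm init phases; each probe exits 0 once a matching process exists. A rough Go sketch of such a polling loop follows; the 500ms interval and 2-minute timeout are assumptions, not values read from the log.)

	// waitapiserver.go - rough sketch of the polling loop suggested by the repeated pgrep
	// runs above; interval and timeout are assumed values, not minikube's own constants.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 as soon as a process matching the pattern exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(2 * time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver is running")
	}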
	I0717 19:33:49.692660  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.693700  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.470426  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.470961  459061 main.go:141] libmachine: (embed-certs-637675) Found IP for machine: 192.168.39.140
	I0717 19:33:51.470987  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has current primary IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.470994  459061 main.go:141] libmachine: (embed-certs-637675) Reserving static IP address...
	I0717 19:33:51.471473  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "embed-certs-637675", mac: "52:54:00:33:d5:fa", ip: "192.168.39.140"} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.471502  459061 main.go:141] libmachine: (embed-certs-637675) Reserved static IP address: 192.168.39.140
	I0717 19:33:51.471530  459061 main.go:141] libmachine: (embed-certs-637675) DBG | skip adding static IP to network mk-embed-certs-637675 - found existing host DHCP lease matching {name: "embed-certs-637675", mac: "52:54:00:33:d5:fa", ip: "192.168.39.140"}
	I0717 19:33:51.471548  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Getting to WaitForSSH function...
	I0717 19:33:51.471563  459061 main.go:141] libmachine: (embed-certs-637675) Waiting for SSH to be available...
	I0717 19:33:51.474038  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.474414  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.474445  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.474588  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Using SSH client type: external
	I0717 19:33:51.474617  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa (-rw-------)
	I0717 19:33:51.474655  459061 main.go:141] libmachine: (embed-certs-637675) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:51.474675  459061 main.go:141] libmachine: (embed-certs-637675) DBG | About to run SSH command:
	I0717 19:33:51.474699  459061 main.go:141] libmachine: (embed-certs-637675) DBG | exit 0
	I0717 19:33:51.604737  459061 main.go:141] libmachine: (embed-certs-637675) DBG | SSH cmd err, output: <nil>: 
	I0717 19:33:51.605100  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetConfigRaw
	I0717 19:33:51.605831  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:51.608613  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.608977  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.609023  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.609289  459061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/config.json ...
	I0717 19:33:51.609523  459061 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:51.609557  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:51.609778  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.611949  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.612259  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.612295  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.612408  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.612598  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.612765  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.612911  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.613071  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.613293  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.613307  459061 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:51.716785  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:51.716815  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.717101  459061 buildroot.go:166] provisioning hostname "embed-certs-637675"
	I0717 19:33:51.717136  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.717318  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.719807  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.720137  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.720163  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.720315  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.720545  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.720719  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.720892  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.721086  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.721258  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.721271  459061 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-637675 && echo "embed-certs-637675" | sudo tee /etc/hostname
	I0717 19:33:51.844077  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-637675
	
	I0717 19:33:51.844111  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.847369  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.847949  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.847987  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.848185  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.848361  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.848523  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.848703  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.848912  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.849127  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.849145  459061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-637675' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-637675/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-637675' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:51.961570  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:33:51.961608  459061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:51.961632  459061 buildroot.go:174] setting up certificates
	I0717 19:33:51.961644  459061 provision.go:84] configureAuth start
	I0717 19:33:51.961658  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.961931  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:51.964788  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.965123  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.965150  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.965303  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.967517  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.967881  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.967910  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.968060  459061 provision.go:143] copyHostCerts
	I0717 19:33:51.968129  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:51.968140  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:51.968203  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:51.968333  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:51.968344  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:51.968371  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:51.968546  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:51.968558  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:51.968605  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:51.968692  459061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.embed-certs-637675 san=[127.0.0.1 192.168.39.140 embed-certs-637675 localhost minikube]
	I0717 19:33:52.257323  459061 provision.go:177] copyRemoteCerts
	I0717 19:33:52.257408  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:52.257443  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.260461  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.260873  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.260897  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.261094  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.261307  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.261485  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.261619  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.347197  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:52.372509  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 19:33:52.397643  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:52.421482  459061 provision.go:87] duration metric: took 459.823049ms to configureAuth
	I0717 19:33:52.421511  459061 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:52.421712  459061 config.go:182] Loaded profile config "embed-certs-637675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:33:52.421789  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.424390  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.424796  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.424827  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.425027  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.425221  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.425363  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.425502  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.425661  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:52.425872  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:52.425902  459061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:52.699426  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:52.699458  459061 machine.go:97] duration metric: took 1.089918524s to provisionDockerMachine
	I0717 19:33:52.699470  459061 start.go:293] postStartSetup for "embed-certs-637675" (driver="kvm2")
	I0717 19:33:52.699483  459061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:52.699505  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.699888  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:52.699943  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.703018  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.703417  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.703463  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.703693  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.704007  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.704318  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.704519  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.791925  459061 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:52.795954  459061 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:52.795980  459061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:52.796095  459061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:52.796191  459061 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:52.796308  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:52.805548  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:52.829531  459061 start.go:296] duration metric: took 130.04771ms for postStartSetup
	I0717 19:33:52.829569  459061 fix.go:56] duration metric: took 20.611916701s for fixHost
	I0717 19:33:52.829611  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.832274  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.832744  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.832778  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.832883  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.833094  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.833276  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.833448  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.833632  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:52.833852  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:52.833871  459061 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 19:33:52.941152  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244832.915250809
	
	I0717 19:33:52.941180  459061 fix.go:216] guest clock: 1721244832.915250809
	I0717 19:33:52.941194  459061 fix.go:229] Guest: 2024-07-17 19:33:52.915250809 +0000 UTC Remote: 2024-07-17 19:33:52.829573693 +0000 UTC m=+356.572558813 (delta=85.677116ms)
	I0717 19:33:52.941221  459061 fix.go:200] guest clock delta is within tolerance: 85.677116ms
	I0717 19:33:52.941232  459061 start.go:83] releasing machines lock for "embed-certs-637675", held for 20.723622875s
	I0717 19:33:52.941257  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.941557  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:52.944096  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.944498  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.944526  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.944682  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945170  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945409  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945520  459061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:52.945595  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.945624  459061 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:52.945653  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.948197  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948530  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.948557  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948575  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948781  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.948912  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.948936  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948966  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.949080  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.949205  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.949228  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.949348  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.949352  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.949465  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:53.054206  459061 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:53.060916  459061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:53.204303  459061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:53.210204  459061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:53.210262  459061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:53.226045  459061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:53.226072  459061 start.go:495] detecting cgroup driver to use...
	I0717 19:33:53.226138  459061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:53.243047  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:53.256611  459061 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:53.256678  459061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:53.269932  459061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:53.285394  459061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:53.412896  459061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:53.573675  459061 docker.go:233] disabling docker service ...
	I0717 19:33:53.573749  459061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:53.590083  459061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:53.603710  459061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:53.727530  459061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:53.873274  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:53.905871  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:53.926509  459061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:33:53.926583  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.937258  459061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:53.937333  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.947782  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.958191  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.970004  459061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:53.982062  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.992942  459061 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:54.011137  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:54.022170  459061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:54.033118  459061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:54.033183  459061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:54.046510  459061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:33:54.056086  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:54.203486  459061 ssh_runner.go:195] Run: sudo systemctl restart crio
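
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with a chain of sed one-liners (pause image registry.k8s.io/pause:3.9, cgroupfs as cgroup manager, conmon_cgroup = "pod", the net.ipv4.ip_unprivileged_port_start sysctl) and then reloads systemd and restarts CRI-O. A small Go sketch of the same string rewrites applied to an in-memory copy of the drop-in; the starting content is an assumed example and the logic only mirrors the sed expressions shown in the log:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed starting content of 02-crio.conf; only the keys touched by the
	// sed commands in the log are shown.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.5"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// pause_image -> registry.k8s.io/pause:3.9
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	// cgroup_manager -> cgroupfs
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// drop any existing conmon_cgroup line, then re-add it after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	// ensure the unprivileged-port sysctl is present under default_sysctls
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}

	fmt.Print(conf)
}
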
	I0717 19:33:54.336557  459061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:54.336645  459061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:54.342342  459061 start.go:563] Will wait 60s for crictl version
	I0717 19:33:54.342422  459061 ssh_runner.go:195] Run: which crictl
	I0717 19:33:54.346334  459061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:54.388801  459061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:54.388898  459061 ssh_runner.go:195] Run: crio --version
	I0717 19:33:54.419237  459061 ssh_runner.go:195] Run: crio --version
	I0717 19:33:54.459513  459061 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 19:33:54.460727  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:54.463803  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:54.464194  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:54.464235  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:54.464521  459061 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:54.469869  459061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:54.484510  459061 kubeadm.go:883] updating cluster {Name:embed-certs-637675 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:54.484680  459061 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:33:54.484750  459061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:54.530253  459061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 19:33:54.530339  459061 ssh_runner.go:195] Run: which lz4
	I0717 19:33:54.534466  459061 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:33:54.538610  459061 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:54.538642  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 19:33:55.923529  459061 crio.go:462] duration metric: took 1.389095679s to copy over tarball
	I0717 19:33:55.923617  459061 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:51.894538  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:53.896853  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:56.394940  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.556187  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:52.056816  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:52.555884  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.056440  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.556003  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:54.056810  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:54.556947  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:55.055878  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:55.556110  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:56.056460  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.693746  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:55.695193  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:58.139069  459061 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.215401803s)
	I0717 19:33:58.139116  459061 crio.go:469] duration metric: took 2.215553314s to extract the tarball
	I0717 19:33:58.139127  459061 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:58.178293  459061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:58.219163  459061 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:33:58.219188  459061 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:33:58.219197  459061 kubeadm.go:934] updating node { 192.168.39.140 8443 v1.30.2 crio true true} ...
	I0717 19:33:58.219306  459061 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-637675 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
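
The kubelet systemd drop-in printed above is rendered per node: the binaries directory, --hostname-override and --node-ip come from the machine config. A Go text/template sketch that produces the same drop-in; the struct and template here are illustrative stand-ins, not minikube's actual templates:

package main

import (
	"os"
	"text/template"
)

// Node-specific values, taken from the log output above.
type kubeletUnit struct {
	BinDir   string
	NodeName string
	NodeIP   string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
	_ = t.Execute(os.Stdout, kubeletUnit{
		BinDir:   "/var/lib/minikube/binaries/v1.30.2",
		NodeName: "embed-certs-637675",
		NodeIP:   "192.168.39.140",
	})
}
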
	I0717 19:33:58.219383  459061 ssh_runner.go:195] Run: crio config
	I0717 19:33:58.262906  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:33:58.262925  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:58.262934  459061 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:58.262957  459061 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.140 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-637675 NodeName:embed-certs-637675 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:58.263084  459061 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-637675"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
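
The generated kubeadm config above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A short Go sketch that splits and inspects such a file; the gopkg.in/yaml.v3 dependency and the subset of fields checked are assumptions made for the example:

package main

import (
	"bytes"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path taken from the log
	if err != nil {
		panic(err)
	}

	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		// Each document is decoded into a generic map; kubeadm's real config
		// types live in upstream Kubernetes and are deliberately not used here.
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break
			}
			panic(err)
		}
		kind, _ := doc["kind"].(string)
		apiVersion, _ := doc["apiVersion"].(string)
		fmt.Printf("found %s (%s)\n", kind, apiVersion)

		if kind == "ClusterConfiguration" {
			if net, ok := doc["networking"].(map[string]interface{}); ok {
				fmt.Printf("  podSubnet=%v serviceSubnet=%v\n",
					net["podSubnet"], net["serviceSubnet"])
			}
		}
	}
}
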
	I0717 19:33:58.263147  459061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 19:33:58.273657  459061 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:58.273723  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:58.283599  459061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0717 19:33:58.300393  459061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:58.317742  459061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0717 19:33:58.334880  459061 ssh_runner.go:195] Run: grep 192.168.39.140	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:58.338573  459061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:58.350476  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:58.480706  459061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:58.498116  459061 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675 for IP: 192.168.39.140
	I0717 19:33:58.498139  459061 certs.go:194] generating shared ca certs ...
	I0717 19:33:58.498161  459061 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:58.498326  459061 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:58.498380  459061 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:58.498394  459061 certs.go:256] generating profile certs ...
	I0717 19:33:58.498518  459061 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/client.key
	I0717 19:33:58.498580  459061 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.key.c8cdbf09
	I0717 19:33:58.498853  459061 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.key
	I0717 19:33:58.499016  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:58.499066  459061 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:58.499081  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:58.499115  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:58.499256  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:58.499299  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:58.499435  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:58.500359  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:58.544981  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:58.588099  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:58.621983  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:58.652262  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 19:33:58.676887  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:33:58.701437  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:58.726502  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:58.751839  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:58.777500  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:58.801388  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:58.825450  459061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:58.842717  459061 ssh_runner.go:195] Run: openssl version
	I0717 19:33:58.848256  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:58.858519  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.863057  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.863130  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.869045  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:58.879255  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:58.890546  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.895342  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.895394  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.901225  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:58.912043  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:58.922557  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.926974  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.927063  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.932819  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:58.943396  459061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:58.947900  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:58.953946  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:58.960139  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:58.965932  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:58.971638  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:58.977437  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
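
Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The same check can be done in Go with crypto/x509. A minimal sketch, using one of the cert paths from the log and simply printing the result instead of regenerating anything:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// inside the given window (86400s in the log's openssl -checkend calls).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if expiring {
		fmt.Println("certificate expires within 24h, would need regeneration")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
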
	I0717 19:33:58.983041  459061 kubeadm.go:392] StartCluster: {Name:embed-certs-637675 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:58.983125  459061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:58.983159  459061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:59.026606  459061 cri.go:89] found id: ""
	I0717 19:33:59.026700  459061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:59.037020  459061 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:59.037045  459061 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:59.037089  459061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:59.046698  459061 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:59.047817  459061 kubeconfig.go:125] found "embed-certs-637675" server: "https://192.168.39.140:8443"
	I0717 19:33:59.049941  459061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:59.059451  459061 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.140
	I0717 19:33:59.059482  459061 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:59.059500  459061 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:59.059544  459061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:59.095066  459061 cri.go:89] found id: ""
	I0717 19:33:59.095128  459061 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:59.112170  459061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:59.122995  459061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:59.123014  459061 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:59.123063  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:59.133289  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:59.133372  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:59.143515  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:59.152845  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:59.152898  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:59.162821  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:59.173290  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:59.173353  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:59.184053  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:59.195281  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:59.195345  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
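
The four grep/rm pairs above are the stale-config cleanup on the restart path: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is removed otherwise so the kubeadm "init phase kubeconfig" step below can rewrite it (here the files simply do not exist yet). A compact Go sketch of the same idea, with the paths and endpoint copied from the log and error handling simplified:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Missing file (as in the log) means there is nothing to clean up.
			fmt.Printf("%s: not present, skipping\n", f)
			continue
		}
		if strings.Contains(string(data), endpoint) {
			fmt.Printf("%s: already points at %s, keeping\n", f, endpoint)
			continue
		}
		// Stale kubeconfig: remove it so `kubeadm init phase kubeconfig` rewrites it.
		if err := os.Remove(f); err != nil {
			fmt.Printf("%s: remove failed: %v\n", f, err)
		} else {
			fmt.Printf("%s: removed stale config\n", f)
		}
	}
}
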
	I0717 19:33:59.205300  459061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:59.219019  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:59.337326  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.220304  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.451460  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.631448  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.701064  459061 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:34:00.701166  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.201848  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.895830  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.394535  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:56.556934  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.055977  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.556878  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.056308  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.556348  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:59.056674  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:59.556870  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:00.055931  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:00.555977  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.055886  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.695265  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:59.973534  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:02.193004  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.701254  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.809514  459061 api_server.go:72] duration metric: took 1.10844859s to wait for apiserver process to appear ...
	I0717 19:34:01.809547  459061 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:34:01.809597  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:01.810183  459061 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I0717 19:34:02.309904  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.789701  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:34:04.789732  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:34:04.789745  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.862326  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:34:04.862359  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:34:04.862371  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.885715  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:04.885755  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
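
The polling loop above hits https://192.168.39.140:8443/healthz roughly every 500ms and treats 403 (anonymous access rejected before the RBAC bootstrap roles exist) and 500 (poststarthooks still settling) the same way: log the body and retry until the endpoint returns 200. A stripped-down Go sketch of such a loop; the interval, the two-minute deadline, and the use of InsecureSkipVerify (rather than trusting the cluster CA, as the real code would) are assumptions of the sketch:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const url = "https://192.168.39.140:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: skip verification instead of loading the cluster CA bundle.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// Connection refused while the apiserver container is still starting.
			fmt.Printf("healthz not reachable yet: %v\n", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthz returned 200, control plane is up")
				return
			}
			// 403 before RBAC bootstrap or 500 while poststarthooks settle: retry.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
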
	I0717 19:34:05.310281  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:05.314611  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:05.314645  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:05.810297  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:05.817458  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:05.817492  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:03.395467  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:05.894353  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.556897  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:02.056800  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:02.556122  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:03.056427  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:03.556914  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.056571  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.556144  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:05.056037  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:05.555875  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:06.056743  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.193618  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.194585  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.310494  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:06.318694  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:06.318740  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:06.809794  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:06.815231  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:06.815259  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:07.310287  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:07.314865  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:07.314892  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:07.810489  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:07.815153  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:07.815184  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:08.310494  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:08.315173  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0717 19:34:08.321509  459061 api_server.go:141] control plane version: v1.30.2
	I0717 19:34:08.321539  459061 api_server.go:131] duration metric: took 6.51198343s to wait for apiserver health ...
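	The [+]/[-] per-check listing in the 500 responses above is the apiserver's verbose healthz output; once every post-start hook has finished, the same endpoint returns a plain 200 "ok" as seen here. As a rough sketch (the context name and certificate paths below are illustrative, not taken from this run), the same breakdown can be requested by hand:

	    # Ask the apiserver for the per-check healthz breakdown via kubectl.
	    kubectl --context embed-certs-637675 get --raw='/healthz?verbose'

	    # Or query the secure port directly from the node; cert paths are illustrative.
	    curl --cacert /var/lib/minikube/certs/ca.crt \
	         --cert   /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	         --key    /var/lib/minikube/certs/apiserver-kubelet-client.key \
	         'https://192.168.39.140:8443/healthz?verbose'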
	I0717 19:34:08.321550  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:34:08.321558  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:34:08.323369  459061 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:34:08.324555  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:34:08.336384  459061 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
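	The 496-byte conflist written above is the bridge CNI configuration that the "Configuring bridge CNI" step refers to. A representative sketch of such a file follows; the field values, and the pod subnet in particular, are assumptions for illustration and may differ from the exact file written here:

	    # Write an example bridge CNI conflist to a scratch path for inspection.
	    cat <<'EOF' > /tmp/1-k8s.conflist.example
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF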
	I0717 19:34:08.357196  459061 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:34:08.373813  459061 system_pods.go:59] 8 kube-system pods found
	I0717 19:34:08.373849  459061 system_pods.go:61] "coredns-7db6d8ff4d-8brst" [aec5eaab-66a7-4221-84a1-b7967bd26cb8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:34:08.373856  459061 system_pods.go:61] "etcd-embed-certs-637675" [f2e395a3-fd1f-4a92-98ce-d6093d7b2faf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:34:08.373864  459061 system_pods.go:61] "kube-apiserver-embed-certs-637675" [358154e3-59e5-4535-9e1d-ee3b9eab5464] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:34:08.373871  459061 system_pods.go:61] "kube-controller-manager-embed-certs-637675" [641c70ba-a6fa-4975-bdb5-727b5ba64a87] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:34:08.373875  459061 system_pods.go:61] "kube-proxy-4cv66" [1a561d4e-4910-4ff0-9a1e-070e60e27cb4] Running
	I0717 19:34:08.373879  459061 system_pods.go:61] "kube-scheduler-embed-certs-637675" [83f50c1c-44ca-4b1f-ad85-0c617f1c8a67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:34:08.373886  459061 system_pods.go:61] "metrics-server-569cc877fc-mtnc6" [c44ea24f-67b5-4540-8c27-5b0068ac55b1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:34:08.373889  459061 system_pods.go:61] "storage-provisioner" [c42c411b-4206-4686-95c4-c9c279877684] Running
	I0717 19:34:08.373895  459061 system_pods.go:74] duration metric: took 16.671935ms to wait for pod list to return data ...
	I0717 19:34:08.373902  459061 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:34:08.388698  459061 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:34:08.388737  459061 node_conditions.go:123] node cpu capacity is 2
	I0717 19:34:08.388749  459061 node_conditions.go:105] duration metric: took 14.84302ms to run NodePressure ...
	I0717 19:34:08.388769  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:08.750983  459061 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:34:08.759547  459061 kubeadm.go:739] kubelet initialised
	I0717 19:34:08.759579  459061 kubeadm.go:740] duration metric: took 8.564098ms waiting for restarted kubelet to initialise ...
	I0717 19:34:08.759592  459061 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:34:08.769683  459061 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.780332  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.780364  459061 pod_ready.go:81] duration metric: took 10.641436ms for pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.780377  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.780387  459061 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.791556  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "etcd-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.791590  459061 pod_ready.go:81] duration metric: took 11.19204ms for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.791605  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "etcd-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.791613  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.801822  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.801874  459061 pod_ready.go:81] duration metric: took 10.246706ms for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.801889  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.801905  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.807704  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.807735  459061 pod_ready.go:81] duration metric: took 5.8166ms for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.807747  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.807755  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4cv66" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:09.161548  459061 pod_ready.go:92] pod "kube-proxy-4cv66" in "kube-system" namespace has status "Ready":"True"
	I0717 19:34:09.161587  459061 pod_ready.go:81] duration metric: took 353.822822ms for pod "kube-proxy-4cv66" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:09.161597  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:11.168387  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
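	The pod_ready waits above poll each system-critical pod until its Ready condition is true, skipping pods whose node is itself not yet Ready. Roughly the same check can be reproduced with kubectl (context name, label selector, and timeout below are illustrative):

	    # Wait for a kube-system pod set to report Ready, mirroring the wait loop above.
	    kubectl --context embed-certs-637675 -n kube-system wait pod \
	      -l k8s-app=kube-proxy --for=condition=Ready --timeout=4m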
	I0717 19:34:07.894730  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:09.895797  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.556740  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:07.056120  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:07.556375  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.055926  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.556426  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:09.056856  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:09.556032  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:10.056791  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:10.556117  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:11.056198  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.694237  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:11.192662  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:13.168686  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:15.668585  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:12.395034  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:14.895242  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:11.556103  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:12.056463  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:12.556709  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.056048  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.556926  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:14.056810  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:14.556793  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:15.056168  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:15.556716  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:16.056041  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.194925  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:15.693550  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:17.668639  459061 pod_ready.go:92] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:34:17.668755  459061 pod_ready.go:81] duration metric: took 8.50714283s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:17.668772  459061 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:19.678850  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:17.395670  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:19.395898  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:21.396841  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:16.556695  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.056877  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.556620  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:18.056628  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:18.556552  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:19.056137  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:19.556627  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:20.056655  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:20.556041  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:21.056058  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.694895  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:20.194174  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:22.176132  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:24.674293  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:23.894981  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.394921  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:21.556663  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.056552  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.556508  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:23.056623  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:23.556414  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:24.055964  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:24.556741  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:25.056721  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:25.556914  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:26.056520  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.693472  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:24.693880  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.695637  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.675680  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:29.176560  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:28.896034  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.394391  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.555925  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:27.056754  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:27.555925  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:28.056226  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:28.556626  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.056219  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.556961  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:30.056546  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:30.555883  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:31.056398  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.195231  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.693669  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.674839  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:33.676172  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:35.676669  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:33.394904  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:35.399901  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.556766  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:32.056928  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:32.556232  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:33.055917  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:33.556864  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.056869  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.555951  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:35.056718  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:35.556230  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:36.056542  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.195066  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:36.692760  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:38.175828  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:40.676034  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:37.894862  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:40.399004  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:36.556557  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:37.056940  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:37.556241  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.056369  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.555969  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:39.056289  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:39.556107  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:40.055999  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:40.556561  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:41.055882  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.693922  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:41.194229  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:42.676087  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:44.680245  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:42.898155  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:45.402470  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:41.556589  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:42.055932  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:42.556345  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:43.056754  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:43.056873  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:43.097168  459741 cri.go:89] found id: ""
	I0717 19:34:43.097214  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.097226  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:43.097234  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:43.097302  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:43.139033  459741 cri.go:89] found id: ""
	I0717 19:34:43.139067  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.139077  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:43.139084  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:43.139138  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:43.179520  459741 cri.go:89] found id: ""
	I0717 19:34:43.179549  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.179558  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:43.179566  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:43.179705  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:43.216014  459741 cri.go:89] found id: ""
	I0717 19:34:43.216044  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.216063  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:43.216071  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:43.216141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:43.250985  459741 cri.go:89] found id: ""
	I0717 19:34:43.251030  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.251038  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:43.251044  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:43.251109  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:43.286797  459741 cri.go:89] found id: ""
	I0717 19:34:43.286840  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.286849  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:43.286856  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:43.286919  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:43.321626  459741 cri.go:89] found id: ""
	I0717 19:34:43.321657  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.321665  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:43.321671  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:43.321733  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:43.355415  459741 cri.go:89] found id: ""
	I0717 19:34:43.355444  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.355452  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:43.355462  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:43.355476  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:43.409331  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:43.409369  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:43.424013  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:43.424038  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:43.559102  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:43.559132  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:43.559149  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:43.625751  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:43.625791  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
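	The repeated crictl invocations above check each expected component in turn, find no containers, and then fall back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. The per-component check amounts to a loop like the following sketch, assembled from the commands already shown in this log:

	    # List all CRI containers (running or exited) for each expected component.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== ${name} =="
	      sudo crictl ps -a --quiet --name="${name}"
	    done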
	I0717 19:34:46.168132  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:46.196943  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:46.197013  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:46.254167  459741 cri.go:89] found id: ""
	I0717 19:34:46.254197  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.254205  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:46.254211  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:46.254277  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:46.291018  459741 cri.go:89] found id: ""
	I0717 19:34:46.291052  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.291063  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:46.291072  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:46.291136  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:46.331767  459741 cri.go:89] found id: ""
	I0717 19:34:46.331812  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.331825  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:46.331835  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:46.331918  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:46.373157  459741 cri.go:89] found id: ""
	I0717 19:34:46.373206  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.373218  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:46.373226  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:46.373297  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:46.413014  459741 cri.go:89] found id: ""
	I0717 19:34:46.413041  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.413055  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:46.413061  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:46.413114  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:46.456115  459741 cri.go:89] found id: ""
	I0717 19:34:46.456148  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.456159  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:46.456167  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:46.456230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:46.492962  459741 cri.go:89] found id: ""
	I0717 19:34:46.493048  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.493063  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:46.493074  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:46.493149  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:43.195298  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:45.695368  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:47.175268  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:49.176199  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:47.895768  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:50.395078  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:46.533824  459741 cri.go:89] found id: ""
	I0717 19:34:46.533856  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.533868  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:46.533882  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:46.533899  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:46.614205  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:46.614229  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:46.614242  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:46.689833  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:46.689875  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:46.729427  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:46.729463  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:46.779887  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:46.779930  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:49.294846  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:49.308554  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:49.308625  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:49.343774  459741 cri.go:89] found id: ""
	I0717 19:34:49.343802  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.343810  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:49.343816  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:49.343872  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:49.380698  459741 cri.go:89] found id: ""
	I0717 19:34:49.380729  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.380737  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:49.380744  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:49.380796  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:49.422026  459741 cri.go:89] found id: ""
	I0717 19:34:49.422059  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.422073  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:49.422082  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:49.422147  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:49.465793  459741 cri.go:89] found id: ""
	I0717 19:34:49.465837  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.465850  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:49.465859  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:49.465929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:49.503462  459741 cri.go:89] found id: ""
	I0717 19:34:49.503507  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.503519  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:49.503528  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:49.503598  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:49.546776  459741 cri.go:89] found id: ""
	I0717 19:34:49.546808  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.546818  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:49.546826  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:49.546895  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:49.589367  459741 cri.go:89] found id: ""
	I0717 19:34:49.589401  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.589412  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:49.589420  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:49.589493  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:49.625497  459741 cri.go:89] found id: ""
	I0717 19:34:49.625532  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.625543  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:49.625557  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:49.625574  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:49.664499  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:49.664536  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:49.718160  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:49.718202  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:49.732774  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:49.732807  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:49.806951  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:49.806981  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:49.806999  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:48.192967  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:50.193695  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:51.675656  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:54.175342  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:56.176351  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:52.895953  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:55.394057  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:52.379790  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:52.393469  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:52.393554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:52.434277  459741 cri.go:89] found id: ""
	I0717 19:34:52.434312  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.434322  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:52.434330  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:52.434388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:52.470378  459741 cri.go:89] found id: ""
	I0717 19:34:52.470413  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.470421  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:52.470428  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:52.470501  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:52.506331  459741 cri.go:89] found id: ""
	I0717 19:34:52.506361  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.506369  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:52.506376  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:52.506431  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:52.547497  459741 cri.go:89] found id: ""
	I0717 19:34:52.547532  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.547540  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:52.547545  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:52.547615  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:52.584389  459741 cri.go:89] found id: ""
	I0717 19:34:52.584423  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.584434  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:52.584442  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:52.584527  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:52.621381  459741 cri.go:89] found id: ""
	I0717 19:34:52.621408  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.621416  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:52.621422  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:52.621472  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:52.661706  459741 cri.go:89] found id: ""
	I0717 19:34:52.661744  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.661756  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:52.661764  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:52.661832  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:52.702736  459741 cri.go:89] found id: ""
	I0717 19:34:52.702763  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.702773  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:52.702784  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:52.702799  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:52.741742  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:52.741779  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:52.794377  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:52.794429  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:52.809685  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:52.809717  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:52.884263  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:52.884289  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:52.884305  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:55.472342  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:55.486612  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:55.486677  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:55.519486  459741 cri.go:89] found id: ""
	I0717 19:34:55.519514  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.519522  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:55.519528  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:55.519638  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:55.555162  459741 cri.go:89] found id: ""
	I0717 19:34:55.555190  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.555198  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:55.555204  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:55.555259  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:55.591239  459741 cri.go:89] found id: ""
	I0717 19:34:55.591276  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.591288  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:55.591297  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:55.591359  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:55.628203  459741 cri.go:89] found id: ""
	I0717 19:34:55.628239  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.628251  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:55.628258  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:55.628347  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:55.664663  459741 cri.go:89] found id: ""
	I0717 19:34:55.664702  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.664715  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:55.664725  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:55.664822  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:55.702741  459741 cri.go:89] found id: ""
	I0717 19:34:55.702773  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.702780  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:55.702788  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:55.702862  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:55.745601  459741 cri.go:89] found id: ""
	I0717 19:34:55.745642  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.745653  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:55.745661  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:55.745742  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:55.786699  459741 cri.go:89] found id: ""
	I0717 19:34:55.786727  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.786736  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:55.786746  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:55.786764  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:55.831685  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:55.831722  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:55.885346  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:55.885389  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:55.902374  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:55.902407  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:55.974221  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:55.974245  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:55.974259  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:52.693991  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:55.194420  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:58.676747  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:01.176131  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:57.894988  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:00.394486  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:58.557685  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:58.571821  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:58.571887  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:58.606713  459741 cri.go:89] found id: ""
	I0717 19:34:58.606742  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.606751  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:58.606757  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:58.606831  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:58.640693  459741 cri.go:89] found id: ""
	I0717 19:34:58.640728  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.640738  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:58.640746  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:58.640816  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:58.675351  459741 cri.go:89] found id: ""
	I0717 19:34:58.675385  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.675396  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:58.675403  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:58.675470  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:58.711792  459741 cri.go:89] found id: ""
	I0717 19:34:58.711825  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.711834  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:58.711841  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:58.711898  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:58.751391  459741 cri.go:89] found id: ""
	I0717 19:34:58.751418  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.751427  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:58.751432  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:58.751492  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:58.789067  459741 cri.go:89] found id: ""
	I0717 19:34:58.789099  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.789109  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:58.789116  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:58.789193  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:58.827415  459741 cri.go:89] found id: ""
	I0717 19:34:58.827453  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.827464  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:58.827470  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:58.827538  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:58.865505  459741 cri.go:89] found id: ""
	I0717 19:34:58.865543  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.865553  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:58.865566  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:58.865587  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:58.921388  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:58.921427  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:58.935694  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:58.935724  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:59.012534  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:59.012561  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:59.012598  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:59.095950  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:59.096045  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:57.694041  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:00.194529  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:02.194641  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:03.176199  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:05.176261  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:02.894558  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:04.899436  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:01.640824  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:01.654969  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:01.655062  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:01.700480  459741 cri.go:89] found id: ""
	I0717 19:35:01.700528  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.700540  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:01.700548  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:01.700621  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:01.739274  459741 cri.go:89] found id: ""
	I0717 19:35:01.739309  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.739319  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:01.739327  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:01.739403  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:01.778555  459741 cri.go:89] found id: ""
	I0717 19:35:01.778591  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.778601  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:01.778609  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:01.778676  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:01.819147  459741 cri.go:89] found id: ""
	I0717 19:35:01.819189  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.819204  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:01.819213  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:01.819290  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:01.857132  459741 cri.go:89] found id: ""
	I0717 19:35:01.857178  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.857190  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:01.857199  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:01.857274  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:01.895551  459741 cri.go:89] found id: ""
	I0717 19:35:01.895583  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.895593  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:01.895602  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:01.895679  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:01.938146  459741 cri.go:89] found id: ""
	I0717 19:35:01.938185  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.938198  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:01.938206  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:01.938284  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:01.974876  459741 cri.go:89] found id: ""
	I0717 19:35:01.974909  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.974919  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:01.974933  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:01.974955  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:02.050651  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:02.050679  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:02.050711  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:02.130149  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:02.130191  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:02.170930  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:02.170961  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:02.226842  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:02.226889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:04.742978  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:04.757649  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:04.757714  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:04.795487  459741 cri.go:89] found id: ""
	I0717 19:35:04.795517  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.795525  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:04.795531  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:04.795583  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:04.832554  459741 cri.go:89] found id: ""
	I0717 19:35:04.832596  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.832607  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:04.832620  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:04.832678  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:04.867859  459741 cri.go:89] found id: ""
	I0717 19:35:04.867895  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.867904  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:04.867911  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:04.867971  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:04.905936  459741 cri.go:89] found id: ""
	I0717 19:35:04.905969  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.905978  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:04.905985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:04.906064  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:04.943177  459741 cri.go:89] found id: ""
	I0717 19:35:04.943204  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.943213  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:04.943219  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:04.943273  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:04.980038  459741 cri.go:89] found id: ""
	I0717 19:35:04.980073  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.980087  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:04.980093  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:04.980154  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:05.020848  459741 cri.go:89] found id: ""
	I0717 19:35:05.020885  459741 logs.go:276] 0 containers: []
	W0717 19:35:05.020896  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:05.020907  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:05.020985  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:05.060505  459741 cri.go:89] found id: ""
	I0717 19:35:05.060543  459741 logs.go:276] 0 containers: []
	W0717 19:35:05.060556  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:05.060592  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:05.060617  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:05.113354  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:05.113400  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:05.128045  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:05.128086  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:05.213923  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:05.214020  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:05.214045  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:05.296526  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:05.296577  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:04.194995  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:06.694576  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.678930  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:10.175252  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.394677  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:09.394932  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:11.395166  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.835865  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:07.851503  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:07.851581  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:07.899945  459741 cri.go:89] found id: ""
	I0717 19:35:07.899976  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.899984  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:07.899992  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:07.900066  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:07.938294  459741 cri.go:89] found id: ""
	I0717 19:35:07.938326  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.938335  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:07.938342  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:07.938402  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:07.975274  459741 cri.go:89] found id: ""
	I0717 19:35:07.975309  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.975319  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:07.975327  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:07.975401  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:08.010818  459741 cri.go:89] found id: ""
	I0717 19:35:08.010864  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.010873  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:08.010880  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:08.010945  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:08.054494  459741 cri.go:89] found id: ""
	I0717 19:35:08.054532  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.054544  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:08.054552  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:08.054651  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:08.096357  459741 cri.go:89] found id: ""
	I0717 19:35:08.096384  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.096393  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:08.096399  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:08.096461  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:08.134694  459741 cri.go:89] found id: ""
	I0717 19:35:08.134739  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.134749  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:08.134755  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:08.134833  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:08.171722  459741 cri.go:89] found id: ""
	I0717 19:35:08.171757  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.171768  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:08.171780  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:08.171797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:08.252441  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:08.252502  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:08.298782  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:08.298815  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:08.352934  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:08.352974  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:08.367121  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:08.367158  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:08.445860  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:10.946537  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:10.959955  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:10.960025  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:10.994611  459741 cri.go:89] found id: ""
	I0717 19:35:10.994646  459741 logs.go:276] 0 containers: []
	W0717 19:35:10.994658  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:10.994667  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:10.994733  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:11.031997  459741 cri.go:89] found id: ""
	I0717 19:35:11.032027  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.032035  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:11.032041  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:11.032115  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:11.073818  459741 cri.go:89] found id: ""
	I0717 19:35:11.073854  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.073865  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:11.073874  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:11.073942  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:11.109966  459741 cri.go:89] found id: ""
	I0717 19:35:11.110000  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.110012  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:11.110025  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:11.110100  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:11.146928  459741 cri.go:89] found id: ""
	I0717 19:35:11.146958  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.146980  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:11.146988  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:11.147056  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:11.189327  459741 cri.go:89] found id: ""
	I0717 19:35:11.189364  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.189374  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:11.189383  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:11.189457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:11.228587  459741 cri.go:89] found id: ""
	I0717 19:35:11.228628  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.228641  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:11.228650  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:11.228719  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:11.267624  459741 cri.go:89] found id: ""
	I0717 19:35:11.267671  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.267685  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:11.267699  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:11.267716  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:11.322589  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:11.322631  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:11.338101  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:11.338147  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:11.411360  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:11.411387  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:11.411405  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:11.495657  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:11.495701  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:09.194430  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:11.693290  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:12.175345  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:14.175825  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:16.177247  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:13.894711  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:15.894771  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
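	(The pod_ready.go:102 lines above poll a pod's Ready condition until it turns True. A minimal sketch of the same check using client-go is shown below; it is not minikube's pod_ready implementation, and the kubeconfig path and pod name are placeholders.)

	package main

	import (
		"context"
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder inputs: adjust the kubeconfig path and pod name for your cluster.
		kubeconfig := os.Getenv("KUBECONFIG")
		podName := "metrics-server-569cc877fc-7rl9d"

		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), podName, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}

		// A pod is "Ready" only when its PodReady condition has status True.
		ready := false
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("pod %q in \"kube-system\" namespace has status \"Ready\": %v\n", podName, ready)
	}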
	I0717 19:35:14.037797  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:14.050939  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:14.051012  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:14.093711  459741 cri.go:89] found id: ""
	I0717 19:35:14.093744  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.093756  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:14.093764  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:14.093837  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:14.132139  459741 cri.go:89] found id: ""
	I0717 19:35:14.132168  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.132180  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:14.132188  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:14.132256  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:14.170950  459741 cri.go:89] found id: ""
	I0717 19:35:14.170978  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.170988  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:14.170995  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:14.171073  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:14.211104  459741 cri.go:89] found id: ""
	I0717 19:35:14.211138  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.211148  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:14.211155  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:14.211229  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:14.245921  459741 cri.go:89] found id: ""
	I0717 19:35:14.245961  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.245975  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:14.245985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:14.246053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:14.309477  459741 cri.go:89] found id: ""
	I0717 19:35:14.309509  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.309520  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:14.309529  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:14.309617  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:14.346835  459741 cri.go:89] found id: ""
	I0717 19:35:14.346863  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.346872  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:14.346878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:14.346935  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:14.381258  459741 cri.go:89] found id: ""
	I0717 19:35:14.381289  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.381298  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:14.381307  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:14.381324  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:14.436214  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:14.436262  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:14.452446  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:14.452478  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:14.520238  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:14.520265  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:14.520282  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:14.600444  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:14.600502  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:13.694391  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:16.194147  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:18.676158  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.676984  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:18.394226  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.395263  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:17.144586  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:17.157992  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:17.158084  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:17.195200  459741 cri.go:89] found id: ""
	I0717 19:35:17.195228  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.195238  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:17.195245  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:17.195308  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:17.231846  459741 cri.go:89] found id: ""
	I0717 19:35:17.231892  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.231904  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:17.231913  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:17.231974  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:17.268234  459741 cri.go:89] found id: ""
	I0717 19:35:17.268261  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.268269  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:17.268275  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:17.268328  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:17.308536  459741 cri.go:89] found id: ""
	I0717 19:35:17.308565  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.308574  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:17.308581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:17.308655  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:17.344285  459741 cri.go:89] found id: ""
	I0717 19:35:17.344316  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.344325  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:17.344331  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:17.344393  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:17.384384  459741 cri.go:89] found id: ""
	I0717 19:35:17.384416  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.384425  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:17.384431  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:17.384518  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:17.422255  459741 cri.go:89] found id: ""
	I0717 19:35:17.422282  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.422291  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:17.422297  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:17.422349  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:17.459561  459741 cri.go:89] found id: ""
	I0717 19:35:17.459590  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.459599  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:17.459611  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:17.459628  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:17.473472  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:17.473510  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:17.544929  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:17.544962  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:17.544979  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:17.627230  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:17.627275  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:17.680586  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:17.680622  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:20.234582  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:20.248215  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:20.248282  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:20.286124  459741 cri.go:89] found id: ""
	I0717 19:35:20.286159  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.286171  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:20.286180  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:20.286251  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:20.323885  459741 cri.go:89] found id: ""
	I0717 19:35:20.323925  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.323938  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:20.323945  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:20.324013  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:20.363968  459741 cri.go:89] found id: ""
	I0717 19:35:20.364011  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.364025  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:20.364034  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:20.364108  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:20.404100  459741 cri.go:89] found id: ""
	I0717 19:35:20.404127  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.404136  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:20.404142  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:20.404212  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:20.442339  459741 cri.go:89] found id: ""
	I0717 19:35:20.442372  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.442383  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:20.442391  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:20.442462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:20.480461  459741 cri.go:89] found id: ""
	I0717 19:35:20.480505  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.480517  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:20.480526  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:20.480618  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:20.516072  459741 cri.go:89] found id: ""
	I0717 19:35:20.516104  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.516114  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:20.516119  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:20.516171  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:20.552294  459741 cri.go:89] found id: ""
	I0717 19:35:20.552333  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.552345  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:20.552359  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:20.552377  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:20.607025  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:20.607067  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:20.624323  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:20.624363  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:20.716528  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:20.716550  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:20.716567  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:20.797015  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:20.797059  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:18.693667  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.694367  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:23.175240  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:25.175374  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:22.893704  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:24.893940  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:23.345063  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:23.358664  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:23.358781  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:23.395399  459741 cri.go:89] found id: ""
	I0717 19:35:23.395429  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.395436  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:23.395441  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:23.395498  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:23.434827  459741 cri.go:89] found id: ""
	I0717 19:35:23.434866  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.434880  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:23.434889  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:23.434960  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:23.470884  459741 cri.go:89] found id: ""
	I0717 19:35:23.470915  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.470931  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:23.470937  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:23.470989  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:23.508532  459741 cri.go:89] found id: ""
	I0717 19:35:23.508566  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.508575  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:23.508581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:23.508636  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:23.543803  459741 cri.go:89] found id: ""
	I0717 19:35:23.543840  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.543856  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:23.543865  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:23.543938  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:23.578897  459741 cri.go:89] found id: ""
	I0717 19:35:23.578942  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.578953  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:23.578962  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:23.579028  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:23.617967  459741 cri.go:89] found id: ""
	I0717 19:35:23.618003  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.618013  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:23.618021  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:23.618092  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:23.660780  459741 cri.go:89] found id: ""
	I0717 19:35:23.660818  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.660830  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:23.660845  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:23.660862  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:23.745248  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:23.745305  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:23.784355  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:23.784392  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:23.838152  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:23.838199  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:23.853017  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:23.853046  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:23.932674  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
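	(The repeated "connection to the server localhost:8443 was refused" above means nothing is listening on the apiserver port of this node, consistent with the empty kube-apiserver listings. A minimal, hypothetical probe one could run on the node to confirm this is sketched below; it is not part of minikube.)

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The node-local kubeconfig points kubectl at localhost:8443; if the
		// kube-apiserver container never started, this dial fails with
		// "connection refused", matching the describe-nodes error above.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver endpoint unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on 127.0.0.1:8443")
	}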
	I0717 19:35:26.433476  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:26.457953  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:26.458030  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:23.192304  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:25.193087  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:27.176102  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:29.677887  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:26.895714  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:29.398017  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:26.515559  459741 cri.go:89] found id: ""
	I0717 19:35:26.515589  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.515598  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:26.515605  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:26.515668  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:26.555092  459741 cri.go:89] found id: ""
	I0717 19:35:26.555123  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.555134  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:26.555142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:26.555208  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:26.591291  459741 cri.go:89] found id: ""
	I0717 19:35:26.591335  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.591348  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:26.591357  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:26.591429  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:26.628941  459741 cri.go:89] found id: ""
	I0717 19:35:26.628970  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.628978  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:26.628985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:26.629050  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:26.668355  459741 cri.go:89] found id: ""
	I0717 19:35:26.668386  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.668394  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:26.668399  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:26.668457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:26.711810  459741 cri.go:89] found id: ""
	I0717 19:35:26.711846  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.711857  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:26.711865  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:26.711937  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:26.751674  459741 cri.go:89] found id: ""
	I0717 19:35:26.751708  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.751719  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:26.751726  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:26.751781  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:26.792690  459741 cri.go:89] found id: ""
	I0717 19:35:26.792784  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.792803  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:26.792816  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:26.792847  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:26.846466  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:26.846503  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:26.861467  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:26.861500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:26.934219  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:26.934244  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:26.934260  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:27.017150  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:27.017197  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:29.569360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:29.584040  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:29.584112  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:29.619704  459741 cri.go:89] found id: ""
	I0717 19:35:29.619738  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.619750  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:29.619756  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:29.619824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:29.655983  459741 cri.go:89] found id: ""
	I0717 19:35:29.656018  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.656030  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:29.656037  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:29.656103  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:29.694056  459741 cri.go:89] found id: ""
	I0717 19:35:29.694088  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.694098  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:29.694107  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:29.694165  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:29.731955  459741 cri.go:89] found id: ""
	I0717 19:35:29.732047  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.732066  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:29.732075  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:29.732142  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:29.765921  459741 cri.go:89] found id: ""
	I0717 19:35:29.765952  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.765961  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:29.765967  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:29.766022  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:29.798699  459741 cri.go:89] found id: ""
	I0717 19:35:29.798728  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.798736  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:29.798742  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:29.798804  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:29.832551  459741 cri.go:89] found id: ""
	I0717 19:35:29.832580  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.832587  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:29.832593  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:29.832652  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:29.867985  459741 cri.go:89] found id: ""
	I0717 19:35:29.868022  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.868033  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:29.868046  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:29.868071  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:29.941724  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:29.941746  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:29.941760  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:30.025462  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:30.025506  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:30.066732  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:30.066768  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:30.117389  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:30.117434  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:27.694070  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:30.193593  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.194062  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.175354  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:34.675049  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:31.894626  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:33.897661  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.394620  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.632779  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:32.648751  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:32.648828  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:32.686145  459741 cri.go:89] found id: ""
	I0717 19:35:32.686174  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.686182  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:32.686190  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:32.686242  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:32.721924  459741 cri.go:89] found id: ""
	I0717 19:35:32.721956  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.721967  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:32.721974  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:32.722042  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:32.760815  459741 cri.go:89] found id: ""
	I0717 19:35:32.760851  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.760862  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:32.760869  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:32.760939  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:32.797740  459741 cri.go:89] found id: ""
	I0717 19:35:32.797779  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.797792  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:32.797801  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:32.797878  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:32.833914  459741 cri.go:89] found id: ""
	I0717 19:35:32.833947  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.833955  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:32.833962  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:32.834020  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:32.870265  459741 cri.go:89] found id: ""
	I0717 19:35:32.870297  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.870306  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:32.870319  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:32.870388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:32.911340  459741 cri.go:89] found id: ""
	I0717 19:35:32.911380  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.911391  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:32.911402  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:32.911470  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:32.947932  459741 cri.go:89] found id: ""
	I0717 19:35:32.947967  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.947978  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:32.947990  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:32.948008  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:33.016473  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:33.016513  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:33.016527  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:33.096741  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:33.096783  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:33.137686  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:33.137723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:33.194110  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:33.194157  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:35.710074  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:35.723799  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:35.723880  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:35.759473  459741 cri.go:89] found id: ""
	I0717 19:35:35.759515  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.759526  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:35.759535  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:35.759606  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:35.796764  459741 cri.go:89] found id: ""
	I0717 19:35:35.796799  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.796809  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:35.796817  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:35.796892  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:35.831345  459741 cri.go:89] found id: ""
	I0717 19:35:35.831375  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.831386  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:35.831394  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:35.831463  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:35.869885  459741 cri.go:89] found id: ""
	I0717 19:35:35.869920  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.869931  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:35.869939  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:35.870009  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:35.908812  459741 cri.go:89] found id: ""
	I0717 19:35:35.908840  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.908849  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:35.908855  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:35.908909  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:35.946227  459741 cri.go:89] found id: ""
	I0717 19:35:35.946285  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.946297  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:35.946305  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:35.946387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:35.983534  459741 cri.go:89] found id: ""
	I0717 19:35:35.983577  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.983592  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:35.983601  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:35.983670  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:36.019516  459741 cri.go:89] found id: ""
	I0717 19:35:36.019552  459741 logs.go:276] 0 containers: []
	W0717 19:35:36.019564  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:36.019578  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:36.019597  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:36.070887  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:36.070931  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:36.087054  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:36.087092  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:36.163759  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:36.163795  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:36.163809  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:36.249968  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:36.250012  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:34.693272  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.693505  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.675472  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.677852  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:40.679662  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.895397  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:41.394394  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.799616  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:38.813094  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:38.813161  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:38.848696  459741 cri.go:89] found id: ""
	I0717 19:35:38.848731  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.848745  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:38.848754  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:38.848836  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:38.885898  459741 cri.go:89] found id: ""
	I0717 19:35:38.885932  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.885943  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:38.885950  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:38.886016  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:38.925499  459741 cri.go:89] found id: ""
	I0717 19:35:38.925531  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.925543  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:38.925550  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:38.925615  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:38.961176  459741 cri.go:89] found id: ""
	I0717 19:35:38.961209  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.961218  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:38.961225  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:38.961279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:38.998940  459741 cri.go:89] found id: ""
	I0717 19:35:38.998971  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.998980  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:38.998986  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:38.999040  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:39.034934  459741 cri.go:89] found id: ""
	I0717 19:35:39.034966  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.034973  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:39.034980  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:39.035034  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:39.070278  459741 cri.go:89] found id: ""
	I0717 19:35:39.070309  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.070319  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:39.070327  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:39.070413  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:39.106302  459741 cri.go:89] found id: ""
	I0717 19:35:39.106337  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.106348  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:39.106361  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:39.106379  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:39.145656  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:39.145685  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:39.198998  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:39.199042  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:39.215383  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:39.215416  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:39.284244  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:39.284270  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:39.284286  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:38.693865  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:40.694855  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:43.176915  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.676854  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:43.394736  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.395188  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:41.864335  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:41.878557  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:41.878645  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:41.919806  459741 cri.go:89] found id: ""
	I0717 19:35:41.919843  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.919856  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:41.919865  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:41.919938  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:41.956113  459741 cri.go:89] found id: ""
	I0717 19:35:41.956144  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.956154  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:41.956161  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:41.956230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:41.996211  459741 cri.go:89] found id: ""
	I0717 19:35:41.996256  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.996266  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:41.996274  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:41.996341  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:42.030800  459741 cri.go:89] found id: ""
	I0717 19:35:42.030829  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.030840  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:42.030847  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:42.030922  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:42.065307  459741 cri.go:89] found id: ""
	I0717 19:35:42.065347  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.065358  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:42.065368  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:42.065440  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:42.103574  459741 cri.go:89] found id: ""
	I0717 19:35:42.103609  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.103621  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:42.103628  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:42.103693  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:42.141146  459741 cri.go:89] found id: ""
	I0717 19:35:42.141181  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.141320  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:42.141337  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:42.141418  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:42.179958  459741 cri.go:89] found id: ""
	I0717 19:35:42.179986  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.179994  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:42.180004  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:42.180017  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:42.194911  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:42.194947  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:42.267709  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:42.267750  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:42.267772  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:42.347258  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:42.347302  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:42.393595  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:42.393631  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:44.946043  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:44.958994  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:44.959086  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:44.997687  459741 cri.go:89] found id: ""
	I0717 19:35:44.997724  459741 logs.go:276] 0 containers: []
	W0717 19:35:44.997735  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:44.997743  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:44.997814  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:45.038023  459741 cri.go:89] found id: ""
	I0717 19:35:45.038060  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.038070  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:45.038079  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:45.038141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:45.073529  459741 cri.go:89] found id: ""
	I0717 19:35:45.073562  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.073573  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:45.073581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:45.073644  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:45.109831  459741 cri.go:89] found id: ""
	I0717 19:35:45.109863  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.109871  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:45.109878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:45.109933  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:45.147828  459741 cri.go:89] found id: ""
	I0717 19:35:45.147867  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.147891  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:45.147899  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:45.147986  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:45.184729  459741 cri.go:89] found id: ""
	I0717 19:35:45.184765  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.184777  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:45.184784  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:45.184846  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:45.223895  459741 cri.go:89] found id: ""
	I0717 19:35:45.223940  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.223950  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:45.223956  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:45.224016  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:45.263391  459741 cri.go:89] found id: ""
	I0717 19:35:45.263421  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.263430  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:45.263440  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:45.263457  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:45.316323  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:45.316369  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:45.331447  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:45.331491  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:45.413226  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:45.413259  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:45.413277  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:45.498680  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:45.498738  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:43.193210  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.693264  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:48.175929  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:50.176109  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:47.893486  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:49.894666  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:48.043162  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:48.057081  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:48.057146  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:48.096607  459741 cri.go:89] found id: ""
	I0717 19:35:48.096636  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.096644  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:48.096650  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:48.096710  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:48.132865  459741 cri.go:89] found id: ""
	I0717 19:35:48.132895  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.132906  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:48.132913  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:48.132979  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:48.168060  459741 cri.go:89] found id: ""
	I0717 19:35:48.168090  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.168102  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:48.168109  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:48.168177  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:48.203993  459741 cri.go:89] found id: ""
	I0717 19:35:48.204023  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.204033  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:48.204041  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:48.204102  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:48.240321  459741 cri.go:89] found id: ""
	I0717 19:35:48.240353  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.240364  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:48.240371  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:48.240440  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:48.281103  459741 cri.go:89] found id: ""
	I0717 19:35:48.281147  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.281158  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:48.281167  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:48.281233  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:48.316002  459741 cri.go:89] found id: ""
	I0717 19:35:48.316034  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.316043  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:48.316049  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:48.316102  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:48.355370  459741 cri.go:89] found id: ""
	I0717 19:35:48.355399  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.355409  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:48.355421  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:48.355456  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:48.372448  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:48.372496  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:48.443867  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:48.443901  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:48.443919  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:48.519762  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:48.519807  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:48.562263  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:48.562297  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:51.112016  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:51.125350  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:51.125421  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:51.162053  459741 cri.go:89] found id: ""
	I0717 19:35:51.162090  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.162101  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:51.162111  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:51.162182  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:51.201853  459741 cri.go:89] found id: ""
	I0717 19:35:51.201924  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.201937  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:51.201944  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:51.202021  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:51.241675  459741 cri.go:89] found id: ""
	I0717 19:35:51.241709  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.241720  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:51.241729  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:51.241798  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:51.279332  459741 cri.go:89] found id: ""
	I0717 19:35:51.279369  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.279380  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:51.279388  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:51.279443  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:51.316375  459741 cri.go:89] found id: ""
	I0717 19:35:51.316413  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.316424  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:51.316432  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:51.316531  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:51.353300  459741 cri.go:89] found id: ""
	I0717 19:35:51.353337  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.353347  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:51.353355  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:51.353424  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:51.390413  459741 cri.go:89] found id: ""
	I0717 19:35:51.390441  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.390449  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:51.390457  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:51.390523  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:51.428040  459741 cri.go:89] found id: ""
	I0717 19:35:51.428077  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.428089  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:51.428103  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:51.428145  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:51.481743  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:51.481792  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:51.498226  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:51.498261  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:35:48.194645  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:50.194741  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:52.676762  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:55.177549  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:51.895688  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:54.394821  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	W0717 19:35:51.579871  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:51.579895  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:51.579909  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:51.659448  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:51.659490  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:54.201712  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:54.215688  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:54.215766  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:54.253448  459741 cri.go:89] found id: ""
	I0717 19:35:54.253479  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.253487  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:54.253493  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:54.253547  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:54.288135  459741 cri.go:89] found id: ""
	I0717 19:35:54.288176  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.288187  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:54.288194  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:54.288292  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:54.324798  459741 cri.go:89] found id: ""
	I0717 19:35:54.324845  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.324855  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:54.324864  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:54.324936  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:54.363909  459741 cri.go:89] found id: ""
	I0717 19:35:54.363943  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.363955  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:54.363964  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:54.364039  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:54.401221  459741 cri.go:89] found id: ""
	I0717 19:35:54.401248  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.401259  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:54.401267  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:54.401335  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:54.439258  459741 cri.go:89] found id: ""
	I0717 19:35:54.439285  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.439293  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:54.439299  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:54.439352  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:54.473321  459741 cri.go:89] found id: ""
	I0717 19:35:54.473358  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.473373  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:54.473379  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:54.473432  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:54.519107  459741 cri.go:89] found id: ""
	I0717 19:35:54.519141  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.519152  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:54.519167  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:54.519184  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:54.562666  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:54.562710  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:54.614711  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:54.614756  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:54.630953  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:54.630986  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:54.706639  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:54.706666  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:54.706684  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:52.694467  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:55.193366  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:57.179574  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:59.675883  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:56.895166  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:59.396238  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:57.289180  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:57.302364  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:57.302447  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:57.344401  459741 cri.go:89] found id: ""
	I0717 19:35:57.344437  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.344450  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:57.344459  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:57.344551  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:57.384095  459741 cri.go:89] found id: ""
	I0717 19:35:57.384126  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.384135  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:57.384142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:57.384209  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:57.422789  459741 cri.go:89] found id: ""
	I0717 19:35:57.422825  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.422836  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:57.422844  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:57.422914  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:57.460943  459741 cri.go:89] found id: ""
	I0717 19:35:57.460970  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.460979  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:57.460984  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:57.461035  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:57.495168  459741 cri.go:89] found id: ""
	I0717 19:35:57.495197  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.495204  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:57.495211  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:57.495267  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:57.529611  459741 cri.go:89] found id: ""
	I0717 19:35:57.529641  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.529649  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:57.529656  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:57.529719  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:57.565502  459741 cri.go:89] found id: ""
	I0717 19:35:57.565535  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.565544  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:57.565549  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:57.565610  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:57.601058  459741 cri.go:89] found id: ""
	I0717 19:35:57.601093  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.601107  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:57.601121  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:57.601139  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:57.651408  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:57.651450  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:57.665696  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:57.665734  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:57.739259  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:57.739301  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:57.739335  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:57.818085  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:57.818128  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:00.358441  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:00.371840  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:00.371904  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:00.411607  459741 cri.go:89] found id: ""
	I0717 19:36:00.411639  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.411647  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:00.411653  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:00.411717  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:00.448879  459741 cri.go:89] found id: ""
	I0717 19:36:00.448917  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.448929  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:00.448938  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:00.449006  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:00.489637  459741 cri.go:89] found id: ""
	I0717 19:36:00.489683  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.489695  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:00.489705  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:00.489773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:00.528172  459741 cri.go:89] found id: ""
	I0717 19:36:00.528206  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.528215  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:00.528221  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:00.528284  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:00.564857  459741 cri.go:89] found id: ""
	I0717 19:36:00.564891  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.564903  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:00.564911  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:00.564979  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:00.601226  459741 cri.go:89] found id: ""
	I0717 19:36:00.601257  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.601269  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:00.601277  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:00.601342  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:00.641481  459741 cri.go:89] found id: ""
	I0717 19:36:00.641515  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.641526  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:00.641533  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:00.641609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:00.678564  459741 cri.go:89] found id: ""
	I0717 19:36:00.678590  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.678598  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:00.678608  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:00.678622  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:00.763613  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:00.763657  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:00.804763  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:00.804797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:00.856648  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:00.856686  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:00.870767  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:00.870797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:00.949952  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:57.694827  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:00.193607  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:02.194404  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:01.676020  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:03.676246  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:05.676400  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:01.894566  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:04.394473  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:06.395396  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:03.450461  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:03.465429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:03.465500  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:03.504346  459741 cri.go:89] found id: ""
	I0717 19:36:03.504377  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.504387  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:03.504393  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:03.504457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:03.546643  459741 cri.go:89] found id: ""
	I0717 19:36:03.546671  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.546678  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:03.546685  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:03.546741  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:03.587389  459741 cri.go:89] found id: ""
	I0717 19:36:03.587423  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.587435  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:03.587443  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:03.587506  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:03.621968  459741 cri.go:89] found id: ""
	I0717 19:36:03.622002  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.622014  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:03.622023  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:03.622095  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:03.655934  459741 cri.go:89] found id: ""
	I0717 19:36:03.655967  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.655976  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:03.655982  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:03.656051  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:03.690464  459741 cri.go:89] found id: ""
	I0717 19:36:03.690493  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.690503  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:03.690511  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:03.690575  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:03.727030  459741 cri.go:89] found id: ""
	I0717 19:36:03.727068  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.727080  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:03.727088  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:03.727158  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:03.760858  459741 cri.go:89] found id: ""
	I0717 19:36:03.760898  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.760907  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:03.760917  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:03.760931  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:03.774333  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:03.774366  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:03.849228  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:03.849255  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:03.849273  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:03.930165  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:03.930203  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:03.971833  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:03.971875  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:04.693899  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:07.192840  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:07.678006  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:10.176147  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:08.395699  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:10.894333  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:06.525723  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:06.539410  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:06.539502  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:06.580112  459741 cri.go:89] found id: ""
	I0717 19:36:06.580152  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.580173  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:06.580181  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:06.580272  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:06.622098  459741 cri.go:89] found id: ""
	I0717 19:36:06.622128  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.622136  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:06.622142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:06.622209  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:06.669930  459741 cri.go:89] found id: ""
	I0717 19:36:06.669962  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.669973  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:06.669982  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:06.670048  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:06.717072  459741 cri.go:89] found id: ""
	I0717 19:36:06.717111  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.717124  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:06.717132  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:06.717207  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:06.756637  459741 cri.go:89] found id: ""
	I0717 19:36:06.756672  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.756680  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:06.756694  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:06.756756  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:06.804359  459741 cri.go:89] found id: ""
	I0717 19:36:06.804388  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.804397  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:06.804404  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:06.804468  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:06.856082  459741 cri.go:89] found id: ""
	I0717 19:36:06.856111  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.856120  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:06.856125  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:06.856180  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:06.898141  459741 cri.go:89] found id: ""
	I0717 19:36:06.898170  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.898180  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:06.898191  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:06.898209  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:06.975635  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:06.975660  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:06.975676  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:07.055695  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:07.055741  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:07.096041  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:07.096077  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:07.146523  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:07.146570  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:09.661906  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:09.676994  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:09.677078  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:09.716287  459741 cri.go:89] found id: ""
	I0717 19:36:09.716315  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.716328  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:09.716337  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:09.716405  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:09.759489  459741 cri.go:89] found id: ""
	I0717 19:36:09.759521  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.759532  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:09.759541  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:09.759601  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:09.799604  459741 cri.go:89] found id: ""
	I0717 19:36:09.799634  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.799643  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:09.799649  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:09.799709  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:09.839542  459741 cri.go:89] found id: ""
	I0717 19:36:09.839572  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.839581  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:09.839588  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:09.839666  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:09.879061  459741 cri.go:89] found id: ""
	I0717 19:36:09.879098  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.879110  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:09.879118  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:09.879184  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:09.920903  459741 cri.go:89] found id: ""
	I0717 19:36:09.920931  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.920939  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:09.920946  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:09.921002  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:09.956362  459741 cri.go:89] found id: ""
	I0717 19:36:09.956391  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.956411  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:09.956429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:09.956508  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:09.992817  459741 cri.go:89] found id: ""
	I0717 19:36:09.992849  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.992859  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:09.992872  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:09.992889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:10.060594  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:10.060620  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:10.060660  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:10.141840  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:10.141895  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:10.182850  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:10.182889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:10.238946  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:10.238993  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:09.194101  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:11.693468  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.675987  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:15.176665  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.894710  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:15.394738  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.753796  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:12.766740  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:12.766816  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:12.799307  459741 cri.go:89] found id: ""
	I0717 19:36:12.799341  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.799351  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:12.799362  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:12.799439  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:12.838345  459741 cri.go:89] found id: ""
	I0717 19:36:12.838395  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.838408  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:12.838416  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:12.838482  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:12.876780  459741 cri.go:89] found id: ""
	I0717 19:36:12.876807  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.876816  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:12.876822  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:12.876907  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:12.913222  459741 cri.go:89] found id: ""
	I0717 19:36:12.913253  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.913263  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:12.913271  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:12.913334  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:12.948210  459741 cri.go:89] found id: ""
	I0717 19:36:12.948245  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.948255  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:12.948263  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:12.948328  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:12.980746  459741 cri.go:89] found id: ""
	I0717 19:36:12.980782  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.980794  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:12.980806  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:12.980871  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:13.015655  459741 cri.go:89] found id: ""
	I0717 19:36:13.015694  459741 logs.go:276] 0 containers: []
	W0717 19:36:13.015707  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:13.015715  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:13.015773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:13.050570  459741 cri.go:89] found id: ""
	I0717 19:36:13.050609  459741 logs.go:276] 0 containers: []
	W0717 19:36:13.050617  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:13.050627  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:13.050642  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:13.101031  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:13.101072  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:13.115206  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:13.115239  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:13.190949  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:13.190979  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:13.190994  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:13.267467  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:13.267508  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:15.808237  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:15.822498  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:15.822570  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:15.860509  459741 cri.go:89] found id: ""
	I0717 19:36:15.860545  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.860556  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:15.860564  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:15.860630  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:15.895608  459741 cri.go:89] found id: ""
	I0717 19:36:15.895655  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.895666  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:15.895674  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:15.895738  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:15.936113  459741 cri.go:89] found id: ""
	I0717 19:36:15.936148  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.936159  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:15.936168  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:15.936254  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:15.973146  459741 cri.go:89] found id: ""
	I0717 19:36:15.973186  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.973198  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:15.973207  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:15.973273  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:16.006122  459741 cri.go:89] found id: ""
	I0717 19:36:16.006164  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.006175  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:16.006183  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:16.006255  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:16.044352  459741 cri.go:89] found id: ""
	I0717 19:36:16.044385  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.044397  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:16.044406  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:16.044476  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:16.081573  459741 cri.go:89] found id: ""
	I0717 19:36:16.081614  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.081625  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:16.081637  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:16.081707  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:16.120444  459741 cri.go:89] found id: ""
	I0717 19:36:16.120480  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.120506  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:16.120520  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:16.120536  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:16.171563  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:16.171601  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:16.185534  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:16.185564  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:16.258627  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:16.258657  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:16.258672  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:16.341345  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:16.341390  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:14.193370  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:16.693933  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:17.680240  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:19.681457  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:17.894353  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:19.894879  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:18.883092  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:18.897931  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:18.898015  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:18.932054  459741 cri.go:89] found id: ""
	I0717 19:36:18.932085  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.932096  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:18.932104  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:18.932162  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:18.966450  459741 cri.go:89] found id: ""
	I0717 19:36:18.966478  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.966490  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:18.966498  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:18.966561  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:18.999881  459741 cri.go:89] found id: ""
	I0717 19:36:18.999909  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.999920  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:18.999927  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:18.999984  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:19.036701  459741 cri.go:89] found id: ""
	I0717 19:36:19.036730  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.036746  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:19.036753  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:19.036824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:19.073488  459741 cri.go:89] found id: ""
	I0717 19:36:19.073515  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.073523  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:19.073528  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:19.073582  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:19.109128  459741 cri.go:89] found id: ""
	I0717 19:36:19.109161  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.109171  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:19.109179  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:19.109249  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:19.148452  459741 cri.go:89] found id: ""
	I0717 19:36:19.148494  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.148509  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:19.148518  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:19.148595  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:19.184056  459741 cri.go:89] found id: ""
	I0717 19:36:19.184086  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.184097  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:19.184112  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:19.184129  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:19.198518  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:19.198553  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:19.273176  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:19.273198  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:19.273213  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:19.347999  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:19.348042  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:19.390847  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:19.390890  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:19.194436  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:21.693020  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:22.176414  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:24.676290  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:22.395588  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:24.894771  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:21.946700  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:21.960590  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:21.960655  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:21.994632  459741 cri.go:89] found id: ""
	I0717 19:36:21.994662  459741 logs.go:276] 0 containers: []
	W0717 19:36:21.994670  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:21.994677  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:21.994738  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:22.029390  459741 cri.go:89] found id: ""
	I0717 19:36:22.029419  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.029428  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:22.029434  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:22.029484  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:22.065632  459741 cri.go:89] found id: ""
	I0717 19:36:22.065668  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.065679  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:22.065687  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:22.065792  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:22.100893  459741 cri.go:89] found id: ""
	I0717 19:36:22.100931  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.100942  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:22.100950  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:22.101007  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:22.137064  459741 cri.go:89] found id: ""
	I0717 19:36:22.137099  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.137110  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:22.137118  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:22.137187  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:22.176027  459741 cri.go:89] found id: ""
	I0717 19:36:22.176061  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.176071  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:22.176080  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:22.176147  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:22.211035  459741 cri.go:89] found id: ""
	I0717 19:36:22.211060  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.211068  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:22.211076  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:22.211129  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:22.246541  459741 cri.go:89] found id: ""
	I0717 19:36:22.246577  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.246589  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:22.246617  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:22.246635  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:22.288154  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:22.288198  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:22.342243  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:22.342295  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:22.356125  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:22.356157  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:22.427767  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:22.427793  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:22.427806  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:25.011986  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:25.026057  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:25.026134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:25.060744  459741 cri.go:89] found id: ""
	I0717 19:36:25.060778  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.060788  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:25.060794  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:25.060857  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:25.094760  459741 cri.go:89] found id: ""
	I0717 19:36:25.094799  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.094810  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:25.094818  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:25.094884  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:25.129937  459741 cri.go:89] found id: ""
	I0717 19:36:25.129980  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.129990  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:25.129996  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:25.130053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:25.162886  459741 cri.go:89] found id: ""
	I0717 19:36:25.162914  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.162922  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:25.162927  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:25.162994  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:25.199261  459741 cri.go:89] found id: ""
	I0717 19:36:25.199290  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.199312  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:25.199329  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:25.199388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:25.236454  459741 cri.go:89] found id: ""
	I0717 19:36:25.236494  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.236506  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:25.236514  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:25.236569  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:25.272257  459741 cri.go:89] found id: ""
	I0717 19:36:25.272293  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.272304  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:25.272312  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:25.272381  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:25.308442  459741 cri.go:89] found id: ""
	I0717 19:36:25.308478  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.308504  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:25.308517  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:25.308534  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:25.362269  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:25.362321  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:25.376994  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:25.377026  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:25.450219  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:25.450242  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:25.450256  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:25.537123  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:25.537161  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:23.693457  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.192763  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.677228  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:29.175390  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.176353  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.895481  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:29.393635  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.395374  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:28.077415  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:28.093047  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:28.093126  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:28.128129  459741 cri.go:89] found id: ""
	I0717 19:36:28.128158  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.128166  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:28.128180  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:28.128234  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:28.170796  459741 cri.go:89] found id: ""
	I0717 19:36:28.170834  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.170845  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:28.170853  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:28.170924  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:28.208250  459741 cri.go:89] found id: ""
	I0717 19:36:28.208278  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.208287  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:28.208304  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:28.208385  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:28.251511  459741 cri.go:89] found id: ""
	I0717 19:36:28.251547  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.251567  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:28.251575  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:28.251648  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:28.286597  459741 cri.go:89] found id: ""
	I0717 19:36:28.286633  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.286643  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:28.286651  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:28.286715  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:28.323089  459741 cri.go:89] found id: ""
	I0717 19:36:28.323119  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.323127  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:28.323133  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:28.323192  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:28.357941  459741 cri.go:89] found id: ""
	I0717 19:36:28.357972  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.357980  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:28.357987  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:28.358053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:28.393141  459741 cri.go:89] found id: ""
	I0717 19:36:28.393171  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.393182  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:28.393192  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:28.393208  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:28.446992  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:28.447031  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:28.460386  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:28.460416  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:28.524640  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:28.524671  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:28.524694  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:28.605322  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:28.605363  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:31.145909  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:31.159567  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:31.159686  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:31.196086  459741 cri.go:89] found id: ""
	I0717 19:36:31.196113  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.196125  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:31.196134  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:31.196186  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:31.238076  459741 cri.go:89] found id: ""
	I0717 19:36:31.238104  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.238111  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:31.238117  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:31.238172  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:31.274360  459741 cri.go:89] found id: ""
	I0717 19:36:31.274391  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.274400  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:31.274406  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:31.274462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:31.308845  459741 cri.go:89] found id: ""
	I0717 19:36:31.308871  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.308880  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:31.308886  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:31.308946  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:31.344978  459741 cri.go:89] found id: ""
	I0717 19:36:31.345010  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.345021  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:31.345028  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:31.345094  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:31.381741  459741 cri.go:89] found id: ""
	I0717 19:36:31.381767  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.381775  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:31.381783  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:31.381837  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:31.417522  459741 cri.go:89] found id: ""
	I0717 19:36:31.417554  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.417563  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:31.417571  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:31.417635  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:31.451121  459741 cri.go:89] found id: ""
	I0717 19:36:31.451152  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.451165  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:31.451177  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:31.451195  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:28.195048  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:30.693260  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:33.676171  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:35.676215  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:33.894329  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:36.394573  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.542015  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:31.542063  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:31.583418  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:31.583449  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:31.635807  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:31.635845  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:31.649144  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:31.649172  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:31.728539  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:34.229124  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:34.242482  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:34.242554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:34.276554  459741 cri.go:89] found id: ""
	I0717 19:36:34.276602  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.276610  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:34.276616  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:34.276671  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:34.314766  459741 cri.go:89] found id: ""
	I0717 19:36:34.314799  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.314807  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:34.314813  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:34.314874  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:34.352765  459741 cri.go:89] found id: ""
	I0717 19:36:34.352798  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.352809  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:34.352817  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:34.352886  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:34.386519  459741 cri.go:89] found id: ""
	I0717 19:36:34.386556  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.386564  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:34.386570  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:34.386669  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:34.423789  459741 cri.go:89] found id: ""
	I0717 19:36:34.423820  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.423829  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:34.423838  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:34.423911  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:34.458849  459741 cri.go:89] found id: ""
	I0717 19:36:34.458883  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.458895  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:34.458903  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:34.458969  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:34.494653  459741 cri.go:89] found id: ""
	I0717 19:36:34.494686  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.494697  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:34.494705  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:34.494770  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:34.529386  459741 cri.go:89] found id: ""
	I0717 19:36:34.529423  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.529431  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:34.529441  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:34.529455  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:34.582161  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:34.582204  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:34.596699  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:34.596732  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:34.673468  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:34.673501  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:34.673519  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:34.751134  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:34.751180  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:33.193313  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:35.193610  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:38.178018  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.676860  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:38.395038  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.396311  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:37.290429  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:37.304307  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:37.304391  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:37.338790  459741 cri.go:89] found id: ""
	I0717 19:36:37.338818  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.338827  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:37.338833  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:37.338903  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:37.376923  459741 cri.go:89] found id: ""
	I0717 19:36:37.376953  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.376961  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:37.376966  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:37.377017  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:37.415988  459741 cri.go:89] found id: ""
	I0717 19:36:37.416016  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.416024  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:37.416029  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:37.416083  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:37.449398  459741 cri.go:89] found id: ""
	I0717 19:36:37.449435  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.449447  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:37.449459  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:37.449532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:37.489489  459741 cri.go:89] found id: ""
	I0717 19:36:37.489525  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.489535  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:37.489544  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:37.489609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:37.528055  459741 cri.go:89] found id: ""
	I0717 19:36:37.528092  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.528103  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:37.528112  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:37.528174  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:37.564295  459741 cri.go:89] found id: ""
	I0717 19:36:37.564332  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.564344  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:37.564352  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:37.564421  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:37.597909  459741 cri.go:89] found id: ""
	I0717 19:36:37.597949  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.597960  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:37.597976  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:37.598002  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:37.652104  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:37.652147  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:37.668341  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:37.668374  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:37.746663  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:37.746693  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:37.746706  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:37.822210  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:37.822250  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:40.370417  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:40.385795  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:40.385873  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:40.431821  459741 cri.go:89] found id: ""
	I0717 19:36:40.431861  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.431873  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:40.431881  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:40.431952  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:40.468302  459741 cri.go:89] found id: ""
	I0717 19:36:40.468334  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.468346  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:40.468354  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:40.468409  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:40.503678  459741 cri.go:89] found id: ""
	I0717 19:36:40.503709  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.503727  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:40.503733  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:40.503785  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:40.540732  459741 cri.go:89] found id: ""
	I0717 19:36:40.540763  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.540772  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:40.540778  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:40.540843  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:40.589546  459741 cri.go:89] found id: ""
	I0717 19:36:40.589574  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.589583  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:40.589590  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:40.589642  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:40.625314  459741 cri.go:89] found id: ""
	I0717 19:36:40.625350  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.625359  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:40.625368  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:40.625435  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:40.663946  459741 cri.go:89] found id: ""
	I0717 19:36:40.663974  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.663982  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:40.663990  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:40.664048  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:40.701681  459741 cri.go:89] found id: ""
	I0717 19:36:40.701712  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.701722  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:40.701732  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:40.701747  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:40.762876  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:40.762913  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:40.777993  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:40.778039  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:40.854973  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:40.854996  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:40.855015  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:40.935075  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:40.935114  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:37.693613  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.192783  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:42.193024  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:43.176326  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:45.675745  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:42.895180  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:45.396439  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:43.476048  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:43.490580  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:43.490652  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:43.525613  459741 cri.go:89] found id: ""
	I0717 19:36:43.525649  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.525658  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:43.525665  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:43.525722  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:43.564102  459741 cri.go:89] found id: ""
	I0717 19:36:43.564147  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.564158  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:43.564166  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:43.564230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:43.603290  459741 cri.go:89] found id: ""
	I0717 19:36:43.603316  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.603323  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:43.603329  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:43.603387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:43.638001  459741 cri.go:89] found id: ""
	I0717 19:36:43.638031  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.638038  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:43.638056  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:43.638134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:43.672992  459741 cri.go:89] found id: ""
	I0717 19:36:43.673026  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.673037  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:43.673045  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:43.673115  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:43.713130  459741 cri.go:89] found id: ""
	I0717 19:36:43.713165  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.713176  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:43.713188  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:43.713255  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:43.747637  459741 cri.go:89] found id: ""
	I0717 19:36:43.747685  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.747694  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:43.747702  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:43.747771  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:43.784425  459741 cri.go:89] found id: ""
	I0717 19:36:43.784460  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.784471  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:43.784492  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:43.784510  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:43.798454  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:43.798483  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:43.875753  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:43.875776  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:43.875793  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:43.957009  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:43.957052  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:44.001089  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:44.001122  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:44.193299  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:46.193520  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:47.679212  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:50.176924  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:47.894374  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:49.898348  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:46.554298  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:46.568658  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:46.568730  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:46.604721  459741 cri.go:89] found id: ""
	I0717 19:36:46.604750  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.604759  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:46.604765  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:46.604815  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:46.644164  459741 cri.go:89] found id: ""
	I0717 19:36:46.644196  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.644209  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:46.644217  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:46.644288  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:46.683657  459741 cri.go:89] found id: ""
	I0717 19:36:46.683695  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.683702  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:46.683708  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:46.683773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:46.720967  459741 cri.go:89] found id: ""
	I0717 19:36:46.720995  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.721003  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:46.721008  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:46.721059  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:46.755825  459741 cri.go:89] found id: ""
	I0717 19:36:46.755854  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.755866  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:46.755876  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:46.755946  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:46.797091  459741 cri.go:89] found id: ""
	I0717 19:36:46.797130  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.797138  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:46.797145  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:46.797201  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:46.838053  459741 cri.go:89] found id: ""
	I0717 19:36:46.838090  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.838100  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:46.838108  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:46.838176  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:46.881516  459741 cri.go:89] found id: ""
	I0717 19:36:46.881549  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.881558  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:46.881567  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:46.881582  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:46.952407  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:46.952434  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:46.952457  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:47.043739  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:47.043787  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:47.083335  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:47.083367  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:47.138212  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:47.138256  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:49.656394  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:49.670755  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:49.670830  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:49.709177  459741 cri.go:89] found id: ""
	I0717 19:36:49.709208  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.709217  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:49.709222  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:49.709286  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:49.745905  459741 cri.go:89] found id: ""
	I0717 19:36:49.745940  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.745952  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:49.745960  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:49.746038  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:49.779073  459741 cri.go:89] found id: ""
	I0717 19:36:49.779106  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.779117  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:49.779124  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:49.779190  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:49.815459  459741 cri.go:89] found id: ""
	I0717 19:36:49.815504  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.815516  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:49.815525  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:49.815635  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:49.854714  459741 cri.go:89] found id: ""
	I0717 19:36:49.854751  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.854760  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:49.854766  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:49.854821  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:49.897717  459741 cri.go:89] found id: ""
	I0717 19:36:49.897742  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.897752  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:49.897760  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:49.897824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:49.933388  459741 cri.go:89] found id: ""
	I0717 19:36:49.933419  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.933429  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:49.933437  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:49.933527  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:49.971955  459741 cri.go:89] found id: ""
	I0717 19:36:49.971988  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.971999  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:49.972011  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:49.972029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:50.025761  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:50.025801  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:50.039771  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:50.039801  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:50.111349  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:50.111374  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:50.111388  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:50.193972  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:50.194004  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:48.693842  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:51.192837  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.177150  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:54.675862  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.394841  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:54.395035  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:56.395227  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.733468  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:52.749052  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:52.749119  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:52.785364  459741 cri.go:89] found id: ""
	I0717 19:36:52.785392  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.785400  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:52.785407  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:52.785462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:52.824177  459741 cri.go:89] found id: ""
	I0717 19:36:52.824211  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.824219  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:52.824225  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:52.824298  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:52.860781  459741 cri.go:89] found id: ""
	I0717 19:36:52.860812  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.860823  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:52.860831  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:52.860904  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:52.903963  459741 cri.go:89] found id: ""
	I0717 19:36:52.903995  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.904006  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:52.904014  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:52.904080  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:52.944920  459741 cri.go:89] found id: ""
	I0717 19:36:52.944950  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.944961  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:52.944968  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:52.945033  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:53.007409  459741 cri.go:89] found id: ""
	I0717 19:36:53.007438  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.007449  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:53.007456  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:53.007526  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:53.048160  459741 cri.go:89] found id: ""
	I0717 19:36:53.048193  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.048205  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:53.048213  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:53.048285  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:53.083493  459741 cri.go:89] found id: ""
	I0717 19:36:53.083522  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.083534  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:53.083546  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:53.083563  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:53.139380  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:53.139425  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:53.154005  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:53.154107  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:53.230123  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:53.230146  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:53.230160  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:53.307183  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:53.307228  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:55.849344  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:55.863554  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:55.863625  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:55.899317  459741 cri.go:89] found id: ""
	I0717 19:36:55.899347  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.899358  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:55.899365  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:55.899433  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:55.934725  459741 cri.go:89] found id: ""
	I0717 19:36:55.934760  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.934771  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:55.934779  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:55.934854  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:55.967721  459741 cri.go:89] found id: ""
	I0717 19:36:55.967751  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.967760  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:55.967768  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:55.967835  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:56.001163  459741 cri.go:89] found id: ""
	I0717 19:36:56.001193  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.001203  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:56.001211  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:56.001309  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:56.040863  459741 cri.go:89] found id: ""
	I0717 19:36:56.040898  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.040910  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:56.040918  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:56.040990  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:56.075045  459741 cri.go:89] found id: ""
	I0717 19:36:56.075075  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.075083  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:56.075090  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:56.075141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:56.115641  459741 cri.go:89] found id: ""
	I0717 19:36:56.115673  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.115683  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:56.115692  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:56.115757  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:56.154952  459741 cri.go:89] found id: ""
	I0717 19:36:56.154989  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.155000  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:56.155012  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:56.155029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:56.168624  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:56.168655  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:56.241129  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:56.241149  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:56.241161  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:56.326577  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:56.326627  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:56.370835  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:56.370896  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:53.194230  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:55.693021  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:56.677604  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:59.177845  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:58.395814  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:00.894894  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:58.923483  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:58.936869  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:58.936971  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:58.970975  459741 cri.go:89] found id: ""
	I0717 19:36:58.971015  459741 logs.go:276] 0 containers: []
	W0717 19:36:58.971026  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:58.971036  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:58.971103  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:59.004902  459741 cri.go:89] found id: ""
	I0717 19:36:59.004936  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.004945  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:59.004953  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:59.005021  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:59.049595  459741 cri.go:89] found id: ""
	I0717 19:36:59.049627  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.049635  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:59.049642  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:59.049694  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:59.084143  459741 cri.go:89] found id: ""
	I0717 19:36:59.084175  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.084185  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:59.084192  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:59.084244  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:59.121362  459741 cri.go:89] found id: ""
	I0717 19:36:59.121397  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.121408  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:59.121416  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:59.121486  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:59.158791  459741 cri.go:89] found id: ""
	I0717 19:36:59.158823  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.158832  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:59.158839  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:59.158907  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:59.196785  459741 cri.go:89] found id: ""
	I0717 19:36:59.196814  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.196825  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:59.196832  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:59.196928  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:59.233526  459741 cri.go:89] found id: ""
	I0717 19:36:59.233585  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.233602  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:59.233615  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:59.233633  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:59.287586  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:59.287629  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:59.303060  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:59.303109  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:59.380105  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:59.380141  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:59.380160  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:59.457673  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:59.457723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:57.693064  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:59.696137  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:02.194529  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:01.676676  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:04.174546  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.176591  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:02.895007  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:04.896128  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:01.999397  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:02.013638  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:02.013769  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:02.053831  459741 cri.go:89] found id: ""
	I0717 19:37:02.053860  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.053869  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:02.053875  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:02.053929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:02.095600  459741 cri.go:89] found id: ""
	I0717 19:37:02.095634  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.095644  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:02.095650  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:02.095703  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:02.134219  459741 cri.go:89] found id: ""
	I0717 19:37:02.134253  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.134267  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:02.134277  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:02.134351  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:02.172985  459741 cri.go:89] found id: ""
	I0717 19:37:02.173017  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.173029  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:02.173037  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:02.173109  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:02.210465  459741 cri.go:89] found id: ""
	I0717 19:37:02.210492  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.210500  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:02.210506  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:02.210562  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:02.246736  459741 cri.go:89] found id: ""
	I0717 19:37:02.246767  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.246775  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:02.246781  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:02.246834  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:02.285131  459741 cri.go:89] found id: ""
	I0717 19:37:02.285166  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.285177  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:02.285185  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:02.285254  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:02.323199  459741 cri.go:89] found id: ""
	I0717 19:37:02.323232  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.323241  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:02.323252  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:02.323266  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:02.337356  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:02.337392  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:02.411669  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:02.411706  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:02.411724  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:02.488543  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:02.488590  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:02.531147  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:02.531189  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:05.085888  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:05.099059  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:05.099134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:05.140745  459741 cri.go:89] found id: ""
	I0717 19:37:05.140771  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.140783  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:05.140791  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:05.140859  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:05.175634  459741 cri.go:89] found id: ""
	I0717 19:37:05.175669  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.175679  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:05.175687  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:05.175761  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:05.213114  459741 cri.go:89] found id: ""
	I0717 19:37:05.213148  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.213157  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:05.213171  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:05.213242  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:05.249756  459741 cri.go:89] found id: ""
	I0717 19:37:05.249791  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.249803  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:05.249811  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:05.249882  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:05.285601  459741 cri.go:89] found id: ""
	I0717 19:37:05.285634  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.285645  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:05.285654  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:05.285729  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:05.325523  459741 cri.go:89] found id: ""
	I0717 19:37:05.325557  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.325566  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:05.325573  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:05.325641  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:05.364250  459741 cri.go:89] found id: ""
	I0717 19:37:05.364284  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.364295  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:05.364303  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:05.364377  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:05.399924  459741 cri.go:89] found id: ""
	I0717 19:37:05.399951  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.399958  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:05.399967  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:05.399979  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:05.456770  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:05.456821  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:05.472041  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:05.472073  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:05.539653  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:05.539685  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:05.539703  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:05.628977  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:05.629023  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:04.693176  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.693594  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:08.677525  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.175472  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.897414  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:09.394322  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.395513  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:08.181585  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:08.195153  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:08.195225  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:08.234624  459741 cri.go:89] found id: ""
	I0717 19:37:08.234662  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.234674  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:08.234682  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:08.234739  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:08.273034  459741 cri.go:89] found id: ""
	I0717 19:37:08.273069  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.273081  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:08.273089  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:08.273157  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:08.310695  459741 cri.go:89] found id: ""
	I0717 19:37:08.310728  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.310740  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:08.310749  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:08.310815  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:08.346891  459741 cri.go:89] found id: ""
	I0717 19:37:08.346925  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.346936  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:08.346944  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:08.347015  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:08.384830  459741 cri.go:89] found id: ""
	I0717 19:37:08.384863  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.384872  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:08.384878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:08.384948  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:08.423939  459741 cri.go:89] found id: ""
	I0717 19:37:08.423973  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.423983  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:08.423991  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:08.424046  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:08.460822  459741 cri.go:89] found id: ""
	I0717 19:37:08.460854  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.460863  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:08.460874  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:08.460929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:08.497122  459741 cri.go:89] found id: ""
	I0717 19:37:08.497152  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.497164  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:08.497182  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:08.497197  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:08.549130  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:08.549179  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:08.566072  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:08.566109  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:08.637602  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:08.637629  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:08.637647  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:08.729025  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:08.729078  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:11.270696  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:11.285472  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:11.285554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:11.319587  459741 cri.go:89] found id: ""
	I0717 19:37:11.319629  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.319638  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:11.319646  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:11.319712  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:11.353044  459741 cri.go:89] found id: ""
	I0717 19:37:11.353077  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.353087  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:11.353093  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:11.353189  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:11.389515  459741 cri.go:89] found id: ""
	I0717 19:37:11.389545  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.389557  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:11.389565  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:11.389634  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:11.430599  459741 cri.go:89] found id: ""
	I0717 19:37:11.430632  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.430640  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:11.430646  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:11.430714  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:11.472171  459741 cri.go:89] found id: ""
	I0717 19:37:11.472207  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.472217  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:11.472223  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:11.472295  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:09.193245  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.695407  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:13.176224  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:15.179677  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:13.895579  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:16.394706  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.510599  459741 cri.go:89] found id: ""
	I0717 19:37:11.510672  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.510689  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:11.510706  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:11.510779  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:11.550914  459741 cri.go:89] found id: ""
	I0717 19:37:11.550946  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.550954  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:11.550960  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:11.551017  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:11.591129  459741 cri.go:89] found id: ""
	I0717 19:37:11.591205  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.591219  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:11.591233  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:11.591252  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:11.646229  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:11.646265  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:11.661204  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:11.661243  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:11.742396  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:11.742426  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:11.742442  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:11.824647  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:11.824687  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:14.364360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:14.381022  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:14.381101  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:14.419922  459741 cri.go:89] found id: ""
	I0717 19:37:14.419960  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.419971  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:14.419977  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:14.420032  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:14.459256  459741 cri.go:89] found id: ""
	I0717 19:37:14.459288  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.459296  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:14.459317  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:14.459387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:14.494487  459741 cri.go:89] found id: ""
	I0717 19:37:14.494517  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.494528  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:14.494535  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:14.494609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:14.528878  459741 cri.go:89] found id: ""
	I0717 19:37:14.528919  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.528928  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:14.528934  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:14.528999  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:14.564401  459741 cri.go:89] found id: ""
	I0717 19:37:14.564439  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.564451  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:14.564460  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:14.564548  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:14.604641  459741 cri.go:89] found id: ""
	I0717 19:37:14.604682  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.604694  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:14.604703  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:14.604770  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:14.638128  459741 cri.go:89] found id: ""
	I0717 19:37:14.638159  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.638168  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:14.638175  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:14.638245  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:14.679475  459741 cri.go:89] found id: ""
	I0717 19:37:14.679508  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.679518  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:14.679529  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:14.679545  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:14.733829  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:14.733871  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:14.748878  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:14.748910  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:14.821043  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:14.821073  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:14.821089  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:14.905137  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:14.905178  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:14.193577  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:16.193939  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:17.181158  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:19.675868  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:18.894678  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:20.895683  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:17.445221  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:17.459152  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:17.459221  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:17.498175  459741 cri.go:89] found id: ""
	I0717 19:37:17.498204  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.498216  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:17.498226  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:17.498287  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:17.534460  459741 cri.go:89] found id: ""
	I0717 19:37:17.534498  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.534506  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:17.534512  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:17.534571  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:17.571998  459741 cri.go:89] found id: ""
	I0717 19:37:17.572030  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.572040  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:17.572047  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:17.572110  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:17.611184  459741 cri.go:89] found id: ""
	I0717 19:37:17.611215  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.611224  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:17.611231  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:17.611282  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:17.656227  459741 cri.go:89] found id: ""
	I0717 19:37:17.656275  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.656287  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:17.656295  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:17.656361  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:17.695693  459741 cri.go:89] found id: ""
	I0717 19:37:17.695727  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.695746  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:17.695763  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:17.695835  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:17.734017  459741 cri.go:89] found id: ""
	I0717 19:37:17.734043  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.734052  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:17.734057  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:17.734123  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:17.771539  459741 cri.go:89] found id: ""
	I0717 19:37:17.771575  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.771586  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:17.771597  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:17.771611  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:17.811742  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:17.811783  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:17.861865  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:17.861909  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:17.876221  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:17.876255  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:17.957239  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:17.957262  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:17.957278  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:20.539123  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:20.554464  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:20.554546  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:20.591656  459741 cri.go:89] found id: ""
	I0717 19:37:20.591697  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.591706  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:20.591716  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:20.591775  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:20.629470  459741 cri.go:89] found id: ""
	I0717 19:37:20.629504  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.629513  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:20.629519  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:20.629587  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:20.670022  459741 cri.go:89] found id: ""
	I0717 19:37:20.670090  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.670108  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:20.670120  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:20.670199  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:20.711820  459741 cri.go:89] found id: ""
	I0717 19:37:20.711858  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.711869  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:20.711878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:20.711952  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:20.746305  459741 cri.go:89] found id: ""
	I0717 19:37:20.746339  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.746349  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:20.746356  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:20.746423  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:20.782218  459741 cri.go:89] found id: ""
	I0717 19:37:20.782255  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.782266  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:20.782275  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:20.782351  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:20.818704  459741 cri.go:89] found id: ""
	I0717 19:37:20.818740  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.818749  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:20.818757  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:20.818820  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:20.853662  459741 cri.go:89] found id: ""
	I0717 19:37:20.853693  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.853701  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:20.853710  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:20.853723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:20.896351  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:20.896377  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:20.948402  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:20.948450  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:20.962807  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:20.962840  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:21.057005  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:21.057036  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:21.057055  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:18.693664  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:21.192940  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:21.676124  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:24.175970  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:23.395791  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:25.894186  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:23.634596  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:23.648460  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:23.648555  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:23.687289  459741 cri.go:89] found id: ""
	I0717 19:37:23.687320  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.687331  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:23.687341  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:23.687407  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:23.725794  459741 cri.go:89] found id: ""
	I0717 19:37:23.725826  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.725847  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:23.725855  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:23.725916  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:23.761575  459741 cri.go:89] found id: ""
	I0717 19:37:23.761624  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.761635  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:23.761643  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:23.761709  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:23.800061  459741 cri.go:89] found id: ""
	I0717 19:37:23.800098  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.800111  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:23.800120  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:23.800190  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:23.836067  459741 cri.go:89] found id: ""
	I0717 19:37:23.836098  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.836107  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:23.836113  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:23.836170  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:23.875151  459741 cri.go:89] found id: ""
	I0717 19:37:23.875179  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.875192  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:23.875200  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:23.875268  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:23.913641  459741 cri.go:89] found id: ""
	I0717 19:37:23.913675  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.913685  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:23.913693  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:23.913759  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:23.950362  459741 cri.go:89] found id: ""
	I0717 19:37:23.950391  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.950400  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:23.950410  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:23.950426  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:24.000879  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:24.000924  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:24.014874  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:24.014912  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:24.086589  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:24.086624  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:24.086639  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:24.163160  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:24.163208  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:23.194522  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:25.694306  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:26.675299  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:28.675607  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:31.176216  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:27.895077  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:29.895208  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:26.705781  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:26.720471  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:26.720562  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:26.776895  459741 cri.go:89] found id: ""
	I0717 19:37:26.776927  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.776936  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:26.776945  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:26.777038  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:26.812191  459741 cri.go:89] found id: ""
	I0717 19:37:26.812219  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.812228  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:26.812234  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:26.812288  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:26.851142  459741 cri.go:89] found id: ""
	I0717 19:37:26.851174  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.851183  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:26.851189  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:26.851243  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:26.887218  459741 cri.go:89] found id: ""
	I0717 19:37:26.887254  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.887266  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:26.887274  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:26.887364  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:26.924197  459741 cri.go:89] found id: ""
	I0717 19:37:26.924226  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.924234  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:26.924240  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:26.924293  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:26.964475  459741 cri.go:89] found id: ""
	I0717 19:37:26.964528  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.964538  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:26.964545  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:26.964618  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:27.001951  459741 cri.go:89] found id: ""
	I0717 19:37:27.002001  459741 logs.go:276] 0 containers: []
	W0717 19:37:27.002010  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:27.002017  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:27.002068  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:27.037062  459741 cri.go:89] found id: ""
	I0717 19:37:27.037094  459741 logs.go:276] 0 containers: []
	W0717 19:37:27.037108  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:27.037122  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:27.037140  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:27.090343  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:27.090389  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:27.104534  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:27.104579  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:27.179957  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
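Each "describe nodes" attempt in this run fails the same way: the bundled v1.20.0 kubectl tries to reach the API server on localhost:8443 and is refused, which is consistent with the crictl listings above finding no kube-apiserver (or any other control-plane) container on the node. A quick manual check of the same condition, reusing the commands the log already runs over SSH:

	sudo crictl ps -a --name=kube-apiserver   # empty output here, so nothing is serving 8443
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes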
	I0717 19:37:27.179982  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:27.179995  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:27.260358  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:27.260399  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
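The "container status" gather step is written defensively: the backticked `which crictl || echo crictl` substitutes a bare "crictl" when the binary is not on the PATH, and if that invocation fails the whole pipeline falls back to docker. Expanded, the fallback chain reduces to:

	sudo crictl ps -a || sudo docker ps -a   # what the quoted one-liner amounts to when crictl is installed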
	I0717 19:37:29.806487  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:29.821519  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:29.821584  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:29.856293  459741 cri.go:89] found id: ""
	I0717 19:37:29.856328  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.856338  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:29.856347  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:29.856413  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:29.893174  459741 cri.go:89] found id: ""
	I0717 19:37:29.893210  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.893220  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:29.893229  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:29.893294  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:29.928264  459741 cri.go:89] found id: ""
	I0717 19:37:29.928298  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.928309  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:29.928316  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:29.928386  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:29.963399  459741 cri.go:89] found id: ""
	I0717 19:37:29.963441  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.963453  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:29.963461  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:29.963532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:30.001835  459741 cri.go:89] found id: ""
	I0717 19:37:30.001868  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.001878  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:30.001886  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:30.001953  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:30.039476  459741 cri.go:89] found id: ""
	I0717 19:37:30.039507  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.039516  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:30.039526  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:30.039601  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:30.076051  459741 cri.go:89] found id: ""
	I0717 19:37:30.076089  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.076101  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:30.076121  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:30.076198  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:30.110959  459741 cri.go:89] found id: ""
	I0717 19:37:30.110988  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.111000  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:30.111013  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:30.111029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:30.195062  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:30.195101  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:30.235830  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:30.235872  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:30.291057  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:30.291098  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:30.306510  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:30.306543  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:30.382689  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:28.193720  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:30.693187  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.193323  459147 pod_ready.go:81] duration metric: took 4m0.007067784s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	E0717 19:37:32.193346  459147 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 19:37:32.193354  459147 pod_ready.go:38] duration metric: took 4m5.556690666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:37:32.193373  459147 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:37:32.193409  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:32.193469  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:32.245735  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:32.245775  459147 cri.go:89] found id: ""
	I0717 19:37:32.245785  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:32.245865  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.250669  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:32.250736  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:32.291837  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:32.291863  459147 cri.go:89] found id: ""
	I0717 19:37:32.291873  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:32.291944  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.296739  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:32.296806  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:32.335823  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:32.335854  459147 cri.go:89] found id: ""
	I0717 19:37:32.335873  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:32.335944  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.341789  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:32.341875  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:32.382106  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:32.382128  459147 cri.go:89] found id: ""
	I0717 19:37:32.382136  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:32.382183  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.386399  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:32.386453  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:32.426319  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:32.426348  459147 cri.go:89] found id: ""
	I0717 19:37:32.426358  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:32.426415  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.431280  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:32.431363  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:33.176404  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:35.177851  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.397457  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:34.894702  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
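The pod_ready lines from processes 459147, 459061 and 459447 are interleaved here because several StartStop profiles run in parallel; each is polling its own metrics-server pod, and pod_ready allows the labelled system components at most 4m0s of extra waiting before the deadline expires (as it already has for 459147 above). A sketch of the condition these checks keep polling, assuming kubectl access to the affected profile's context (pod name taken from the log):

	kubectl -n kube-system get pod metrics-server-78fcd8795b-q2jgb \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'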
	I0717 19:37:32.883437  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:32.898085  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:32.898159  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:32.933782  459741 cri.go:89] found id: ""
	I0717 19:37:32.933813  459741 logs.go:276] 0 containers: []
	W0717 19:37:32.933823  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:32.933842  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:32.933909  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:32.973843  459741 cri.go:89] found id: ""
	I0717 19:37:32.973871  459741 logs.go:276] 0 containers: []
	W0717 19:37:32.973879  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:32.973885  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:32.973936  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:33.010691  459741 cri.go:89] found id: ""
	I0717 19:37:33.010718  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.010727  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:33.010732  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:33.010791  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:33.051223  459741 cri.go:89] found id: ""
	I0717 19:37:33.051258  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.051269  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:33.051276  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:33.051345  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:33.091182  459741 cri.go:89] found id: ""
	I0717 19:37:33.091212  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.091220  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:33.091225  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:33.091279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:33.128755  459741 cri.go:89] found id: ""
	I0717 19:37:33.128791  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.128804  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:33.128820  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:33.128887  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:33.171834  459741 cri.go:89] found id: ""
	I0717 19:37:33.171871  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.171883  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:33.171890  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:33.171956  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:33.230954  459741 cri.go:89] found id: ""
	I0717 19:37:33.230982  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.230990  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:33.231001  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:33.231013  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:33.325437  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:33.325483  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:33.325500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:33.418548  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:33.418590  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:33.467574  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:33.467614  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:33.521312  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:33.521346  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.037360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:36.051209  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:36.051279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:36.088849  459741 cri.go:89] found id: ""
	I0717 19:37:36.088897  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.088909  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:36.088916  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:36.088973  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:36.124070  459741 cri.go:89] found id: ""
	I0717 19:37:36.124106  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.124118  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:36.124125  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:36.124199  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:36.159373  459741 cri.go:89] found id: ""
	I0717 19:37:36.159402  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.159410  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:36.159415  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:36.159467  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:36.197269  459741 cri.go:89] found id: ""
	I0717 19:37:36.197294  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.197302  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:36.197337  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:36.197389  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:36.231024  459741 cri.go:89] found id: ""
	I0717 19:37:36.231060  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.231072  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:36.231080  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:36.231152  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:36.265388  459741 cri.go:89] found id: ""
	I0717 19:37:36.265414  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.265422  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:36.265429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:36.265477  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:36.301738  459741 cri.go:89] found id: ""
	I0717 19:37:36.301774  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.301786  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:36.301794  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:36.301892  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:36.340042  459741 cri.go:89] found id: ""
	I0717 19:37:36.340072  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.340080  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:36.340091  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:36.340113  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:36.389928  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:36.389962  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:36.442668  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:36.442698  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.458862  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:36.458908  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:32.470477  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:32.470505  459147 cri.go:89] found id: ""
	I0717 19:37:32.470514  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:32.470579  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.474790  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:32.474845  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:32.511020  459147 cri.go:89] found id: ""
	I0717 19:37:32.511060  459147 logs.go:276] 0 containers: []
	W0717 19:37:32.511075  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:32.511083  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:32.511148  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:32.550662  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:32.550694  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:32.550700  459147 cri.go:89] found id: ""
	I0717 19:37:32.550710  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:32.550779  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.555544  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.559818  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:32.559845  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:32.599011  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:32.599044  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:32.639034  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:32.639072  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:32.680456  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:32.680497  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:32.735881  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:32.735919  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:33.295876  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:33.295927  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:33.453164  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:33.453204  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:33.469665  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:33.469696  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:33.518388  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:33.518425  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:33.580637  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:33.580683  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:33.618544  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:33.618584  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:33.656083  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:33.656127  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:33.703083  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:33.703133  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:36.261037  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:36.278701  459147 api_server.go:72] duration metric: took 4m12.907019507s to wait for apiserver process to appear ...
	I0717 19:37:36.278734  459147 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:37:36.278780  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:36.278843  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:36.320128  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:36.320158  459147 cri.go:89] found id: ""
	I0717 19:37:36.320169  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:36.320231  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.325077  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:36.325145  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:36.375930  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:36.375956  459147 cri.go:89] found id: ""
	I0717 19:37:36.375965  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:36.376022  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.381348  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:36.381428  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:36.425613  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:36.425642  459147 cri.go:89] found id: ""
	I0717 19:37:36.425653  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:36.425718  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.430743  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:36.430809  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:36.473039  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:36.473071  459147 cri.go:89] found id: ""
	I0717 19:37:36.473082  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:36.473144  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.477553  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:36.477632  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:36.519042  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:36.519066  459147 cri.go:89] found id: ""
	I0717 19:37:36.519088  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:36.519168  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.523986  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:36.524052  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:36.565547  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:36.565574  459147 cri.go:89] found id: ""
	I0717 19:37:36.565583  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:36.565636  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.570755  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:36.570832  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:36.608157  459147 cri.go:89] found id: ""
	I0717 19:37:36.608185  459147 logs.go:276] 0 containers: []
	W0717 19:37:36.608194  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:36.608201  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:36.608258  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:36.652807  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:36.652828  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:36.652832  459147 cri.go:89] found id: ""
	I0717 19:37:36.652839  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:36.652899  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.657815  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.663187  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:36.663219  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.681970  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:36.682006  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:36.797996  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:36.798041  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:36.862257  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:36.862300  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:36.900711  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:36.900752  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:37.384370  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:37.384415  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:37.676589  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:40.177720  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:36.888133  459447 pod_ready.go:81] duration metric: took 4m0.000157346s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" ...
	E0717 19:37:36.888161  459447 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 19:37:36.888179  459447 pod_ready.go:38] duration metric: took 4m7.552581235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:37:36.888210  459447 kubeadm.go:597] duration metric: took 4m17.06862666s to restartPrimaryControlPlane
	W0717 19:37:36.888317  459447 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:37:36.888368  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
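With the 4m0s wait for metrics-server-569cc877fc-7rl9d exhausted, process 459447 gives up on restarting the existing control plane and falls back to wiping it: the warning is immediately followed by a kubeadm reset issued over SSH. The command as run (the PATH override makes kubeadm resolve against the bundled v1.30.2 binaries, and --force skips the confirmation prompt):

	sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset \
	  --cri-socket /var/run/crio/crio.sock --force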
	W0717 19:37:36.537169  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:36.537199  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:36.537216  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:39.120374  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:39.138989  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:39.139065  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:39.198086  459741 cri.go:89] found id: ""
	I0717 19:37:39.198113  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.198121  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:39.198128  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:39.198192  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:39.249660  459741 cri.go:89] found id: ""
	I0717 19:37:39.249707  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.249718  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:39.249725  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:39.249802  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:39.296042  459741 cri.go:89] found id: ""
	I0717 19:37:39.296079  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.296105  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:39.296115  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:39.296198  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:39.335401  459741 cri.go:89] found id: ""
	I0717 19:37:39.335441  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.335453  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:39.335461  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:39.335532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:39.379343  459741 cri.go:89] found id: ""
	I0717 19:37:39.379389  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.379401  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:39.379409  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:39.379478  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:39.417450  459741 cri.go:89] found id: ""
	I0717 19:37:39.417478  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.417486  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:39.417493  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:39.417556  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:39.453778  459741 cri.go:89] found id: ""
	I0717 19:37:39.453821  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.453835  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:39.453843  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:39.453937  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:39.490619  459741 cri.go:89] found id: ""
	I0717 19:37:39.490654  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.490666  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:39.490678  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:39.490695  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:39.552266  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:39.552304  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:39.567973  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:39.568018  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:39.659709  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:39.659740  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:39.659757  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:39.752017  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:39.752064  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:37.438269  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:37.438314  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:37.491298  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:37.491338  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:37.544646  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:37.544686  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:37.608191  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:37.608229  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:37.652477  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:37.652526  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:37.693416  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:37.693460  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:37.740997  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:37.741045  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.285764  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:37:40.292091  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 200:
	ok
	I0717 19:37:40.293337  459147 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:37:40.293368  459147 api_server.go:131] duration metric: took 4.014624748s to wait for apiserver health ...
	I0717 19:37:40.293379  459147 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:37:40.293412  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:40.293485  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:40.334754  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:40.334783  459147 cri.go:89] found id: ""
	I0717 19:37:40.334794  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:40.334855  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.338862  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:40.338932  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:40.379320  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:40.379350  459147 cri.go:89] found id: ""
	I0717 19:37:40.379361  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:40.379424  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.384351  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:40.384426  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:40.423393  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:40.423421  459147 cri.go:89] found id: ""
	I0717 19:37:40.423432  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:40.423496  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.429541  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:40.429622  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:40.476723  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:40.476752  459147 cri.go:89] found id: ""
	I0717 19:37:40.476762  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:40.476822  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.483324  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:40.483407  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:40.530062  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:40.530090  459147 cri.go:89] found id: ""
	I0717 19:37:40.530100  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:40.530160  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.535894  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:40.535980  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:40.574966  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:40.575000  459147 cri.go:89] found id: ""
	I0717 19:37:40.575011  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:40.575082  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.579633  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:40.579709  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:40.617093  459147 cri.go:89] found id: ""
	I0717 19:37:40.617131  459147 logs.go:276] 0 containers: []
	W0717 19:37:40.617143  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:40.617151  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:40.617217  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:40.670143  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.670170  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:40.670177  459147 cri.go:89] found id: ""
	I0717 19:37:40.670188  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:40.670265  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.675795  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.681005  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:40.681027  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.729750  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:40.729797  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:41.109749  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:41.109806  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:41.128573  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:41.128616  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:41.246119  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:41.246163  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:41.298281  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:41.298342  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:41.376160  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:41.376205  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:41.444696  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:41.444732  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:41.488191  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:41.488225  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:41.554001  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:41.554055  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:41.596172  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:41.596208  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:41.636145  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:41.636184  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:41.687058  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:41.687092  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:44.246334  459147 system_pods.go:59] 8 kube-system pods found
	I0717 19:37:44.246367  459147 system_pods.go:61] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running
	I0717 19:37:44.246373  459147 system_pods.go:61] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running
	I0717 19:37:44.246379  459147 system_pods.go:61] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running
	I0717 19:37:44.246384  459147 system_pods.go:61] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running
	I0717 19:37:44.246390  459147 system_pods.go:61] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:37:44.246394  459147 system_pods.go:61] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running
	I0717 19:37:44.246401  459147 system_pods.go:61] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:37:44.246406  459147 system_pods.go:61] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:37:44.246416  459147 system_pods.go:74] duration metric: took 3.953030235s to wait for pod list to return data ...
	I0717 19:37:44.246425  459147 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:37:44.249315  459147 default_sa.go:45] found service account: "default"
	I0717 19:37:44.249336  459147 default_sa.go:55] duration metric: took 2.904936ms for default service account to be created ...
	I0717 19:37:44.249344  459147 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:37:44.254845  459147 system_pods.go:86] 8 kube-system pods found
	I0717 19:37:44.254873  459147 system_pods.go:89] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running
	I0717 19:37:44.254879  459147 system_pods.go:89] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running
	I0717 19:37:44.254883  459147 system_pods.go:89] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running
	I0717 19:37:44.254888  459147 system_pods.go:89] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running
	I0717 19:37:44.254892  459147 system_pods.go:89] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:37:44.254895  459147 system_pods.go:89] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running
	I0717 19:37:44.254902  459147 system_pods.go:89] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:37:44.254908  459147 system_pods.go:89] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:37:44.254916  459147 system_pods.go:126] duration metric: took 5.565796ms to wait for k8s-apps to be running ...
	I0717 19:37:44.254922  459147 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:37:44.254970  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:37:44.273765  459147 system_svc.go:56] duration metric: took 18.830474ms WaitForService to wait for kubelet
	I0717 19:37:44.273805  459147 kubeadm.go:582] duration metric: took 4m20.90212576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:37:44.273838  459147 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:37:44.278782  459147 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:37:44.278833  459147 node_conditions.go:123] node cpu capacity is 2
	I0717 19:37:44.278864  459147 node_conditions.go:105] duration metric: took 5.01941ms to run NodePressure ...
	I0717 19:37:44.278879  459147 start.go:241] waiting for startup goroutines ...
	I0717 19:37:44.278889  459147 start.go:246] waiting for cluster config update ...
	I0717 19:37:44.278906  459147 start.go:255] writing updated cluster config ...
	I0717 19:37:44.279303  459147 ssh_runner.go:195] Run: rm -f paused
	I0717 19:37:44.331361  459147 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 19:37:44.334137  459147 out.go:177] * Done! kubectl is now configured to use "no-preload-713715" cluster and "default" namespace by default
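Process 459147 is the one run in this stretch that recovers: once the apiserver answers the healthz probe on https://192.168.61.66:8443, the remaining gates (system pod listing, default service account, kubelet service active, NodePressure) all pass, and the start finishes despite metrics-server-78fcd8795b-q2jgb still being Pending and a minor kubectl/cluster version skew (1.30.3 client against a 1.31.0-beta.0 cluster). To re-check the pod left Pending, a sketch using the context named in the "Done!" line (the pod's hash suffix is specific to this run):

	kubectl --context no-preload-713715 -n kube-system get pod metrics-server-78fcd8795b-q2jgb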
	I0717 19:37:42.676991  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:45.176025  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:42.298864  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:42.312076  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:42.312160  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:42.346742  459741 cri.go:89] found id: ""
	I0717 19:37:42.346767  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.346782  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:42.346787  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:42.346839  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:42.386100  459741 cri.go:89] found id: ""
	I0717 19:37:42.386131  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.386139  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:42.386145  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:42.386196  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:42.420604  459741 cri.go:89] found id: ""
	I0717 19:37:42.420634  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.420646  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:42.420656  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:42.420725  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:42.457305  459741 cri.go:89] found id: ""
	I0717 19:37:42.457338  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.457349  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:42.457357  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:42.457422  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:42.491383  459741 cri.go:89] found id: ""
	I0717 19:37:42.491418  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.491427  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:42.491434  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:42.491489  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:42.527500  459741 cri.go:89] found id: ""
	I0717 19:37:42.527533  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.527547  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:42.527557  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:42.527642  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:42.560724  459741 cri.go:89] found id: ""
	I0717 19:37:42.560759  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.560769  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:42.560778  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:42.560854  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:42.595812  459741 cri.go:89] found id: ""
	I0717 19:37:42.595846  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.595858  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:42.595870  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:42.595886  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:42.610094  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:42.610129  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:42.683744  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:42.683763  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:42.683776  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:42.767187  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:42.767237  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:42.810319  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:42.810350  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:45.363245  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:45.378562  459741 kubeadm.go:597] duration metric: took 4m4.629259775s to restartPrimaryControlPlane
	W0717 19:37:45.378681  459741 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:37:45.378723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:37:47.675784  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:50.174617  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:50.298107  459741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.919332692s)
	I0717 19:37:50.298189  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:37:50.314299  459741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:37:50.325112  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:37:50.335943  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:37:50.335970  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:37:50.336018  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:37:50.345604  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:37:50.345669  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:37:50.355339  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:37:50.365401  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:37:50.365468  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:37:50.378870  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:37:50.388710  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:37:50.388779  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:37:50.398847  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:37:50.408579  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:37:50.408648  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:37:50.419223  459741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:37:50.655878  459741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:37:52.175610  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:54.675346  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:57.175606  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:59.175665  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:01.675667  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:04.174856  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:06.175048  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:08.558767  459447 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.670364582s)
	I0717 19:38:08.558869  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:08.574972  459447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:38:08.585748  459447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:38:08.595641  459447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:38:08.595677  459447 kubeadm.go:157] found existing configuration files:
	
	I0717 19:38:08.595741  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 19:38:08.605738  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:38:08.605792  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:38:08.615415  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 19:38:08.625406  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:38:08.625465  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:38:08.635462  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 19:38:08.644862  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:38:08.644938  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:38:08.654840  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 19:38:08.664308  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:38:08.664371  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:38:08.675152  459447 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:38:08.726060  459447 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 19:38:08.726181  459447 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:38:08.868399  459447 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:38:08.868535  459447 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:38:08.868680  459447 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:38:09.092126  459447 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:38:09.094144  459447 out.go:204]   - Generating certificates and keys ...
	I0717 19:38:09.094257  459447 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:38:09.094344  459447 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:38:09.094447  459447 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:38:09.094529  459447 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:38:09.094728  459447 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:38:09.094841  459447 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:38:09.094958  459447 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:38:09.095051  459447 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:38:09.095145  459447 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:38:09.095234  459447 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:38:09.095302  459447 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:38:09.095407  459447 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:38:09.220760  459447 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:38:09.395779  459447 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:38:09.485283  459447 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:38:09.582142  459447 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:38:09.644739  459447 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:38:09.645546  459447 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:38:09.648168  459447 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:38:08.175516  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:10.676234  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:09.651091  459447 out.go:204]   - Booting up control plane ...
	I0717 19:38:09.651237  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:38:09.651380  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:38:09.651472  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:38:09.672137  459447 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:38:09.675016  459447 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:38:09.675265  459447 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:38:09.835705  459447 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:38:09.835804  459447 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:38:10.837657  459447 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002210874s
	I0717 19:38:10.837780  459447 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 19:38:15.841849  459447 kubeadm.go:310] [api-check] The API server is healthy after 5.002346886s
	I0717 19:38:15.853189  459447 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:38:15.871261  459447 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:38:15.901421  459447 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:38:15.901663  459447 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-378944 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:38:15.914138  459447 kubeadm.go:310] [bootstrap-token] Using token: f20mgr.mp8yeahngp4xg46o
	I0717 19:38:12.678188  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:15.176507  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:15.916156  459447 out.go:204]   - Configuring RBAC rules ...
	I0717 19:38:15.916304  459447 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:38:15.926114  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:38:15.936748  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:38:15.940344  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:38:15.943530  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:38:15.947036  459447 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:38:16.249457  459447 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:38:16.706293  459447 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 19:38:17.247816  459447 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 19:38:17.249321  459447 kubeadm.go:310] 
	I0717 19:38:17.249431  459447 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 19:38:17.249453  459447 kubeadm.go:310] 
	I0717 19:38:17.249552  459447 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 19:38:17.249563  459447 kubeadm.go:310] 
	I0717 19:38:17.249594  459447 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 19:38:17.249677  459447 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:38:17.249768  459447 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:38:17.249791  459447 kubeadm.go:310] 
	I0717 19:38:17.249868  459447 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 19:38:17.249878  459447 kubeadm.go:310] 
	I0717 19:38:17.249949  459447 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:38:17.249968  459447 kubeadm.go:310] 
	I0717 19:38:17.250016  459447 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 19:38:17.250083  459447 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:38:17.250143  459447 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:38:17.250149  459447 kubeadm.go:310] 
	I0717 19:38:17.250269  459447 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:38:17.250371  459447 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 19:38:17.250381  459447 kubeadm.go:310] 
	I0717 19:38:17.250484  459447 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token f20mgr.mp8yeahngp4xg46o \
	I0717 19:38:17.250605  459447 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 19:38:17.250663  459447 kubeadm.go:310] 	--control-plane 
	I0717 19:38:17.250677  459447 kubeadm.go:310] 
	I0717 19:38:17.250771  459447 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:38:17.250784  459447 kubeadm.go:310] 
	I0717 19:38:17.250870  459447 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token f20mgr.mp8yeahngp4xg46o \
	I0717 19:38:17.251029  459447 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 19:38:17.252262  459447 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:38:17.252302  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:38:17.252318  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:38:17.254910  459447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:38:17.669679  459061 pod_ready.go:81] duration metric: took 4m0.000889569s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" ...
	E0717 19:38:17.669706  459061 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 19:38:17.669726  459061 pod_ready.go:38] duration metric: took 4m8.910120635s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:17.669768  459061 kubeadm.go:597] duration metric: took 4m18.632716414s to restartPrimaryControlPlane
	W0717 19:38:17.669838  459061 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:38:17.669870  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:38:17.256192  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:38:17.268586  459447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:38:17.292455  459447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:38:17.292536  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:17.292623  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-378944 minikube.k8s.io/updated_at=2024_07_17T19_38_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=default-k8s-diff-port-378944 minikube.k8s.io/primary=true
	I0717 19:38:17.325184  459447 ops.go:34] apiserver oom_adj: -16
	I0717 19:38:17.469427  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:17.969845  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:18.470139  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:18.969524  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:19.469856  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:19.970486  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:20.470263  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:20.970157  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:21.470331  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:21.969885  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:22.469572  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:22.969898  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:23.470149  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:23.970327  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:24.470275  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:24.970386  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:25.469631  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:25.969749  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:26.469512  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:26.970082  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:27.469534  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:27.970318  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:28.470232  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:28.970033  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:29.469586  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:29.969588  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:30.469599  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:30.970505  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:31.119385  459447 kubeadm.go:1113] duration metric: took 13.826924083s to wait for elevateKubeSystemPrivileges
	I0717 19:38:31.119428  459447 kubeadm.go:394] duration metric: took 5m11.355625204s to StartCluster
	I0717 19:38:31.119449  459447 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:38:31.119548  459447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:38:31.121296  459447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:38:31.121610  459447 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:38:31.121724  459447 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:38:31.121802  459447 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-378944"
	I0717 19:38:31.121827  459447 config.go:182] Loaded profile config "default-k8s-diff-port-378944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:38:31.121846  459447 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-378944"
	I0717 19:38:31.121849  459447 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-378944"
	I0717 19:38:31.121873  459447 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-378944"
	W0717 19:38:31.121883  459447 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:38:31.121899  459447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-378944"
	I0717 19:38:31.121906  459447 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-378944"
	W0717 19:38:31.121915  459447 addons.go:243] addon metrics-server should already be in state true
	I0717 19:38:31.121927  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.121969  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.122322  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122339  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122366  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.122379  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122388  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.122411  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.123339  459447 out.go:177] * Verifying Kubernetes components...
	I0717 19:38:31.129194  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:38:31.139023  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0717 19:38:31.139292  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0717 19:38:31.139632  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.139775  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.140272  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.140292  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.140684  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.140710  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.140731  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.141234  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.141257  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.141425  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0717 19:38:31.141431  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.141919  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.142149  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.142181  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.142410  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.142435  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.142824  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.143055  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.147020  459447 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-378944"
	W0717 19:38:31.147043  459447 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:38:31.147076  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.147428  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.147462  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.158908  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
	I0717 19:38:31.159534  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.160413  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.160438  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.161313  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.161588  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.161794  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0717 19:38:31.162315  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.162935  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.162963  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.163360  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.163618  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.164401  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.165089  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40011
	I0717 19:38:31.165402  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.165493  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.166082  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.166108  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.166133  459447 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:38:31.166520  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.166951  459447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:38:31.166995  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.167294  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.167678  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:38:31.167700  459447 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:38:31.167725  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.168668  459447 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:38:31.168686  459447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:38:31.168704  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.171358  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.171986  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.172013  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172236  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172379  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.172558  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.172646  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.172749  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.172778  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172902  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.173186  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.173396  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.173570  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.173711  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.184779  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0717 19:38:31.185400  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.186325  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.186350  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.186736  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.186981  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.188627  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.188841  459447 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:38:31.188860  459447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:38:31.188881  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.191674  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.192104  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.192129  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.192375  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.192868  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.193084  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.193250  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.351524  459447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:38:31.365996  459447 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-378944" to be "Ready" ...
	I0717 19:38:31.376135  459447 node_ready.go:49] node "default-k8s-diff-port-378944" has status "Ready":"True"
	I0717 19:38:31.376168  459447 node_ready.go:38] duration metric: took 10.135533ms for node "default-k8s-diff-port-378944" to be "Ready" ...
	I0717 19:38:31.376182  459447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:31.385746  459447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:31.471924  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:38:31.488412  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:38:31.488440  459447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:38:31.489634  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:38:31.578028  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:38:31.578059  459447 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:38:31.653567  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:38:31.653598  459447 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:38:31.692100  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:38:32.700716  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.228741753s)
	I0717 19:38:32.700795  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.700796  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.211127639s)
	I0717 19:38:32.700851  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.700869  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.700808  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703149  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703149  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703155  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703183  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703193  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.703202  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703163  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703235  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703254  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.703267  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703505  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703517  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703529  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703554  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703520  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.778305  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.778331  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.778693  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.778779  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.778733  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.942079  459447 pod_ready.go:92] pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:32.942114  459447 pod_ready.go:81] duration metric: took 1.556334407s for pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:32.942128  459447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.018197  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326052616s)
	I0717 19:38:33.018262  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:33.018277  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:33.018625  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:33.018649  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:33.018659  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:33.018669  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:33.018696  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:33.018956  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:33.018975  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:33.018996  459447 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-378944"
	I0717 19:38:33.021803  459447 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:38:33.023032  459447 addons.go:510] duration metric: took 1.901306809s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:38:33.949013  459447 pod_ready.go:92] pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.949038  459447 pod_ready.go:81] duration metric: took 1.006901797s for pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.949050  459447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.953373  459447 pod_ready.go:92] pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.953393  459447 pod_ready.go:81] duration metric: took 4.33631ms for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.953404  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.957845  459447 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.957869  459447 pod_ready.go:81] duration metric: took 4.456882ms for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.957881  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.962465  459447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.962488  459447 pod_ready.go:81] duration metric: took 4.598385ms for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.962500  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vhjq4" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.170244  459447 pod_ready.go:92] pod "kube-proxy-vhjq4" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:34.170274  459447 pod_ready.go:81] duration metric: took 207.766629ms for pod "kube-proxy-vhjq4" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.170284  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.570267  459447 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:34.570299  459447 pod_ready.go:81] duration metric: took 400.008056ms for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.570324  459447 pod_ready.go:38] duration metric: took 3.194102991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:34.570356  459447 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:38:34.570415  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:38:34.590893  459447 api_server.go:72] duration metric: took 3.469242847s to wait for apiserver process to appear ...
	I0717 19:38:34.590918  459447 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:38:34.590939  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:38:34.596086  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 200:
	ok
	I0717 19:38:34.597189  459447 api_server.go:141] control plane version: v1.30.2
	I0717 19:38:34.597213  459447 api_server.go:131] duration metric: took 6.288225ms to wait for apiserver health ...
	I0717 19:38:34.597221  459447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:38:34.774523  459447 system_pods.go:59] 9 kube-system pods found
	I0717 19:38:34.774563  459447 system_pods.go:61] "coredns-7db6d8ff4d-jnwgp" [f86efa81-cbe0-44a7-888f-639af3dc58ad] Running
	I0717 19:38:34.774571  459447 system_pods.go:61] "coredns-7db6d8ff4d-xbtct" [c24ce9ab-babb-4589-8046-e8e2d4ca68af] Running
	I0717 19:38:34.774577  459447 system_pods.go:61] "etcd-default-k8s-diff-port-378944" [b15d7ac0-b014-4fed-8e03-3b2eb8b23911] Running
	I0717 19:38:34.774582  459447 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-378944" [78cd796b-d751-44dd-91e7-85b48c77d87c] Running
	I0717 19:38:34.774590  459447 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-378944" [4981a20d-ce96-4c27-9b14-17e4a8a18a7c] Running
	I0717 19:38:34.774595  459447 system_pods.go:61] "kube-proxy-vhjq4" [092af79d-ebc0-4e16-97ef-725195e95344] Running
	I0717 19:38:34.774598  459447 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-378944" [60a0717a-ad29-4360-a514-afc1081f115c] Running
	I0717 19:38:34.774607  459447 system_pods.go:61] "metrics-server-569cc877fc-hvknj" [d214e760-d49e-4554-85c2-77e5da1b150f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:38:34.774613  459447 system_pods.go:61] "storage-provisioner" [153a102e-f07b-46b4-a9d0-9e754237ca6e] Running
	I0717 19:38:34.774624  459447 system_pods.go:74] duration metric: took 177.395337ms to wait for pod list to return data ...
	I0717 19:38:34.774636  459447 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:38:34.970004  459447 default_sa.go:45] found service account: "default"
	I0717 19:38:34.970040  459447 default_sa.go:55] duration metric: took 195.394993ms for default service account to be created ...
	I0717 19:38:34.970054  459447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:38:35.173288  459447 system_pods.go:86] 9 kube-system pods found
	I0717 19:38:35.173327  459447 system_pods.go:89] "coredns-7db6d8ff4d-jnwgp" [f86efa81-cbe0-44a7-888f-639af3dc58ad] Running
	I0717 19:38:35.173336  459447 system_pods.go:89] "coredns-7db6d8ff4d-xbtct" [c24ce9ab-babb-4589-8046-e8e2d4ca68af] Running
	I0717 19:38:35.173343  459447 system_pods.go:89] "etcd-default-k8s-diff-port-378944" [b15d7ac0-b014-4fed-8e03-3b2eb8b23911] Running
	I0717 19:38:35.173352  459447 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-378944" [78cd796b-d751-44dd-91e7-85b48c77d87c] Running
	I0717 19:38:35.173359  459447 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-378944" [4981a20d-ce96-4c27-9b14-17e4a8a18a7c] Running
	I0717 19:38:35.173365  459447 system_pods.go:89] "kube-proxy-vhjq4" [092af79d-ebc0-4e16-97ef-725195e95344] Running
	I0717 19:38:35.173370  459447 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-378944" [60a0717a-ad29-4360-a514-afc1081f115c] Running
	I0717 19:38:35.173377  459447 system_pods.go:89] "metrics-server-569cc877fc-hvknj" [d214e760-d49e-4554-85c2-77e5da1b150f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:38:35.173384  459447 system_pods.go:89] "storage-provisioner" [153a102e-f07b-46b4-a9d0-9e754237ca6e] Running
	I0717 19:38:35.173397  459447 system_pods.go:126] duration metric: took 203.335308ms to wait for k8s-apps to be running ...
	I0717 19:38:35.173406  459447 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:38:35.173471  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:35.188943  459447 system_svc.go:56] duration metric: took 15.522808ms WaitForService to wait for kubelet
	I0717 19:38:35.188980  459447 kubeadm.go:582] duration metric: took 4.067341756s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:38:35.189006  459447 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:38:35.369694  459447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:38:35.369723  459447 node_conditions.go:123] node cpu capacity is 2
	I0717 19:38:35.369748  459447 node_conditions.go:105] duration metric: took 180.736346ms to run NodePressure ...
	I0717 19:38:35.369764  459447 start.go:241] waiting for startup goroutines ...
	I0717 19:38:35.369773  459447 start.go:246] waiting for cluster config update ...
	I0717 19:38:35.369787  459447 start.go:255] writing updated cluster config ...
	I0717 19:38:35.370064  459447 ssh_runner.go:195] Run: rm -f paused
	I0717 19:38:35.422285  459447 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 19:38:35.424315  459447 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-378944" cluster and "default" namespace by default
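The block above records minikube's post-start verification for "default-k8s-diff-port-378944": it waits for the system-critical pods to report Ready, then polls the apiserver's /healthz endpoint at https://192.168.50.238:8444 until it answers "ok". The snippet below is a minimal, hypothetical Go sketch of that healthz polling step only, not minikube's api_server.go implementation; the URL is taken from the log, probeHealthz is an invented helper name, and TLS verification is skipped purely to keep the sketch self-contained (the test cluster uses a self-signed CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz polls baseURL+"/healthz" until it returns HTTP 200 with body "ok"
// or the overall timeout expires. This mirrors the check logged above in spirit only.
func probeHealthz(baseURL string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip certificate verification for illustration; a real client would
		// load the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(baseURL + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", baseURL, timeout)
}

func main() {
	// Address and port (8444) as reported in the log above.
	if err := probeHealthz("https://192.168.50.238:8444", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ok")
}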
	I0717 19:38:49.633874  459061 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.96396735s)
	I0717 19:38:49.633958  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:49.653668  459061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:38:49.665421  459061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:38:49.677405  459061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:38:49.677433  459061 kubeadm.go:157] found existing configuration files:
	
	I0717 19:38:49.677485  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:38:49.688418  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:38:49.688515  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:38:49.699121  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:38:49.709505  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:38:49.709622  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:38:49.720533  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:38:49.731191  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:38:49.731259  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:38:49.741071  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:38:49.750483  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:38:49.750540  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:38:49.759991  459061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:38:49.814169  459061 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 19:38:49.814235  459061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:38:49.977655  459061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:38:49.977811  459061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:38:49.977922  459061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:38:50.204096  459061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:38:50.206849  459061 out.go:204]   - Generating certificates and keys ...
	I0717 19:38:50.206956  459061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:38:50.207032  459061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:38:50.207102  459061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:38:50.207227  459061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:38:50.207341  459061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:38:50.207388  459061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:38:50.207448  459061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:38:50.207511  459061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:38:50.207618  459061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:38:50.207732  459061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:38:50.207787  459061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:38:50.207868  459061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:38:50.298049  459061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:38:50.456369  459061 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:38:50.649923  459061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:38:50.771710  459061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:38:50.939506  459061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:38:50.939999  459061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:38:50.942645  459061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:38:50.944456  459061 out.go:204]   - Booting up control plane ...
	I0717 19:38:50.944563  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:38:50.944648  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:38:50.944906  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:38:50.963779  459061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:38:50.964946  459061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:38:50.964999  459061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:38:51.112106  459061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:38:51.112222  459061 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:38:51.613966  459061 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.041018ms
	I0717 19:38:51.614079  459061 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 19:38:56.617120  459061 kubeadm.go:310] [api-check] The API server is healthy after 5.003106336s
	I0717 19:38:56.635312  459061 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:38:56.653249  459061 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:38:56.688277  459061 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:38:56.688570  459061 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-637675 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:38:56.703781  459061 kubeadm.go:310] [bootstrap-token] Using token: 5c1d8d.hedm6ka56xpdzroz
	I0717 19:38:56.705437  459061 out.go:204]   - Configuring RBAC rules ...
	I0717 19:38:56.705575  459061 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:38:56.712968  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:38:56.723899  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:38:56.731634  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:38:56.737169  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:38:56.745083  459061 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:38:57.024680  459061 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:38:57.477396  459061 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 19:38:58.025476  459061 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 19:38:58.026512  459061 kubeadm.go:310] 
	I0717 19:38:58.026631  459061 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 19:38:58.026655  459061 kubeadm.go:310] 
	I0717 19:38:58.026772  459061 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 19:38:58.026790  459061 kubeadm.go:310] 
	I0717 19:38:58.026828  459061 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 19:38:58.026905  459061 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:38:58.026971  459061 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:38:58.026979  459061 kubeadm.go:310] 
	I0717 19:38:58.027070  459061 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 19:38:58.027094  459061 kubeadm.go:310] 
	I0717 19:38:58.027163  459061 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:38:58.027171  459061 kubeadm.go:310] 
	I0717 19:38:58.027242  459061 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 19:38:58.027341  459061 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:38:58.027431  459061 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:38:58.027442  459061 kubeadm.go:310] 
	I0717 19:38:58.027547  459061 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:38:58.027663  459061 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 19:38:58.027673  459061 kubeadm.go:310] 
	I0717 19:38:58.027788  459061 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5c1d8d.hedm6ka56xpdzroz \
	I0717 19:38:58.027949  459061 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 19:38:58.027998  459061 kubeadm.go:310] 	--control-plane 
	I0717 19:38:58.028012  459061 kubeadm.go:310] 
	I0717 19:38:58.028123  459061 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:38:58.028133  459061 kubeadm.go:310] 
	I0717 19:38:58.028235  459061 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5c1d8d.hedm6ka56xpdzroz \
	I0717 19:38:58.028355  459061 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 19:38:58.028891  459061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:38:58.029012  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:38:58.029029  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:38:58.031915  459061 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:38:58.033543  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:38:58.044441  459061 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:38:58.062984  459061 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:38:58.063092  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:58.063115  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-637675 minikube.k8s.io/updated_at=2024_07_17T19_38_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=embed-certs-637675 minikube.k8s.io/primary=true
	I0717 19:38:58.088566  459061 ops.go:34] apiserver oom_adj: -16
	I0717 19:38:58.243142  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:58.743578  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:59.244162  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:59.743393  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:00.244096  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:00.743309  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:01.244049  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:01.743222  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:02.243771  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:02.743459  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:03.243303  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:03.743299  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:04.243263  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:04.743572  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:05.243876  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:05.743567  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:06.244040  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:06.743302  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:07.244174  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:07.744243  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:08.244108  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:08.744208  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:09.243712  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:09.743417  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:10.243321  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:10.743234  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:11.244006  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:11.744244  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:12.243673  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:12.373286  459061 kubeadm.go:1113] duration metric: took 14.310267908s to wait for elevateKubeSystemPrivileges
	I0717 19:39:12.373331  459061 kubeadm.go:394] duration metric: took 5m13.390297719s to StartCluster
	I0717 19:39:12.373357  459061 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:39:12.373461  459061 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:39:12.375404  459061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:39:12.375739  459061 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:39:12.375786  459061 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:39:12.375875  459061 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-637675"
	I0717 19:39:12.375919  459061 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-637675"
	W0717 19:39:12.375933  459061 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:39:12.375967  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.375981  459061 config.go:182] Loaded profile config "embed-certs-637675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:39:12.376031  459061 addons.go:69] Setting default-storageclass=true in profile "embed-certs-637675"
	I0717 19:39:12.376062  459061 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-637675"
	I0717 19:39:12.376333  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.376359  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.376426  459061 addons.go:69] Setting metrics-server=true in profile "embed-certs-637675"
	I0717 19:39:12.376494  459061 addons.go:234] Setting addon metrics-server=true in "embed-certs-637675"
	W0717 19:39:12.376526  459061 addons.go:243] addon metrics-server should already be in state true
	I0717 19:39:12.376596  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.376427  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.376672  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.376981  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.377140  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.378935  459061 out.go:177] * Verifying Kubernetes components...
	I0717 19:39:12.380094  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:39:12.396180  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37281
	I0717 19:39:12.396769  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.397333  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.397359  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.397449  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44781
	I0717 19:39:12.397580  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40945
	I0717 19:39:12.397773  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.397893  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.398045  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.398343  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.398355  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.398387  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.398430  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.398488  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.398499  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.398660  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.398798  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.399295  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.399322  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.399545  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.403398  459061 addons.go:234] Setting addon default-storageclass=true in "embed-certs-637675"
	W0717 19:39:12.403420  459061 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:39:12.403451  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.403872  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.403898  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.415595  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0717 19:39:12.416232  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.417013  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.417033  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.417587  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.418029  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.419082  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0717 19:39:12.420074  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.420699  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.420856  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.420875  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.421414  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.421614  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.423149  459061 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:39:12.423248  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
	I0717 19:39:12.423428  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.423575  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.424023  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.424076  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.424418  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.424571  459061 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:39:12.424588  459061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:39:12.424608  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.424944  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.424980  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.425348  459061 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:39:12.426757  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:39:12.426781  459061 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:39:12.426853  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.427990  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.428571  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.428594  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.429076  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.429456  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.429803  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.430161  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.430952  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.432978  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.433047  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.433185  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.433366  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.433623  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.433978  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.441066  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0717 19:39:12.441557  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.442011  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.442029  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.442447  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.442677  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.444789  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.444999  459061 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:39:12.445015  459061 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:39:12.445036  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.447829  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.448361  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.448390  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.448577  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.448770  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.448936  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.449070  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.728350  459061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:39:12.774599  459061 node_ready.go:35] waiting up to 6m0s for node "embed-certs-637675" to be "Ready" ...
	I0717 19:39:12.787047  459061 node_ready.go:49] node "embed-certs-637675" has status "Ready":"True"
	I0717 19:39:12.787080  459061 node_ready.go:38] duration metric: took 12.442277ms for node "embed-certs-637675" to be "Ready" ...
	I0717 19:39:12.787092  459061 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:39:12.794421  459061 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:12.884786  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:39:12.916243  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:39:12.956508  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:39:12.956539  459061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:39:13.012727  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:39:13.012757  459061 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:39:13.090259  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:39:13.090288  459061 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:39:13.189147  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:39:13.743500  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.743529  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.743886  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.743943  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.743967  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.743984  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.743993  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.744243  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.744292  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.744318  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.745277  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.745344  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.745605  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.745624  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.745632  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.745642  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.745646  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.745835  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.745861  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.745876  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.760884  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.760909  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.761330  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.761352  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.761392  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.809721  459061 pod_ready.go:92] pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:13.809743  459061 pod_ready.go:81] duration metric: took 1.015289517s for pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:13.809753  459061 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.027460  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:14.027489  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:14.027856  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:14.027878  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:14.027889  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:14.027898  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:14.028130  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:14.028146  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:14.028177  459061 addons.go:475] Verifying addon metrics-server=true in "embed-certs-637675"
	I0717 19:39:14.030113  459061 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:39:14.031442  459061 addons.go:510] duration metric: took 1.65566168s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:39:14.816503  459061 pod_ready.go:92] pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.816527  459061 pod_ready.go:81] duration metric: took 1.006767634s for pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.816536  459061 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.820749  459061 pod_ready.go:92] pod "etcd-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.820768  459061 pod_ready.go:81] duration metric: took 4.225695ms for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.820775  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.824793  459061 pod_ready.go:92] pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.824812  459061 pod_ready.go:81] duration metric: took 4.02987ms for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.824823  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.828718  459061 pod_ready.go:92] pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.828738  459061 pod_ready.go:81] duration metric: took 3.907636ms for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.828748  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dns5j" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.178249  459061 pod_ready.go:92] pod "kube-proxy-dns5j" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:15.178276  459061 pod_ready.go:81] duration metric: took 349.519823ms for pod "kube-proxy-dns5j" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.178289  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.578418  459061 pod_ready.go:92] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:15.578445  459061 pod_ready.go:81] duration metric: took 400.149092ms for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.578454  459061 pod_ready.go:38] duration metric: took 2.791350468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:39:15.578471  459061 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:39:15.578526  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:39:15.597456  459061 api_server.go:72] duration metric: took 3.221674147s to wait for apiserver process to appear ...
	I0717 19:39:15.597483  459061 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:39:15.597503  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:39:15.602054  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0717 19:39:15.603214  459061 api_server.go:141] control plane version: v1.30.2
	I0717 19:39:15.603238  459061 api_server.go:131] duration metric: took 5.7478ms to wait for apiserver health ...
	I0717 19:39:15.603248  459061 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:39:15.783262  459061 system_pods.go:59] 9 kube-system pods found
	I0717 19:39:15.783295  459061 system_pods.go:61] "coredns-7db6d8ff4d-45xn7" [9c936942-55bb-44c9-b446-365ec316c390] Running
	I0717 19:39:15.783300  459061 system_pods.go:61] "coredns-7db6d8ff4d-nw8g8" [0313a484-73be-49e2-a483-b15f47abc24a] Running
	I0717 19:39:15.783303  459061 system_pods.go:61] "etcd-embed-certs-637675" [d83ac63c-5eb5-40f0-bf58-37c048642b72] Running
	I0717 19:39:15.783307  459061 system_pods.go:61] "kube-apiserver-embed-certs-637675" [0b60ef89-e78c-4e24-b391-a5d4930d0f5f] Running
	I0717 19:39:15.783310  459061 system_pods.go:61] "kube-controller-manager-embed-certs-637675" [b2da7425-19f4-4435-8a30-17744a3289b0] Running
	I0717 19:39:15.783312  459061 system_pods.go:61] "kube-proxy-dns5j" [4d248751-6ee4-460d-b608-be6586613e3d] Running
	I0717 19:39:15.783315  459061 system_pods.go:61] "kube-scheduler-embed-certs-637675" [43f463da-858a-4261-b7a1-01e504e157f6] Running
	I0717 19:39:15.783321  459061 system_pods.go:61] "metrics-server-569cc877fc-jf42d" [c92dbb96-5721-4ff9-a428-9215223d2b83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:39:15.783325  459061 system_pods.go:61] "storage-provisioner" [11a18e44-b523-46b2-a890-dd693460e032] Running
	I0717 19:39:15.783331  459061 system_pods.go:74] duration metric: took 180.078172ms to wait for pod list to return data ...
	I0717 19:39:15.783339  459061 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:39:15.978711  459061 default_sa.go:45] found service account: "default"
	I0717 19:39:15.978747  459061 default_sa.go:55] duration metric: took 195.400502ms for default service account to be created ...
	I0717 19:39:15.978762  459061 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:39:16.181968  459061 system_pods.go:86] 9 kube-system pods found
	I0717 19:39:16.181997  459061 system_pods.go:89] "coredns-7db6d8ff4d-45xn7" [9c936942-55bb-44c9-b446-365ec316c390] Running
	I0717 19:39:16.182003  459061 system_pods.go:89] "coredns-7db6d8ff4d-nw8g8" [0313a484-73be-49e2-a483-b15f47abc24a] Running
	I0717 19:39:16.182007  459061 system_pods.go:89] "etcd-embed-certs-637675" [d83ac63c-5eb5-40f0-bf58-37c048642b72] Running
	I0717 19:39:16.182011  459061 system_pods.go:89] "kube-apiserver-embed-certs-637675" [0b60ef89-e78c-4e24-b391-a5d4930d0f5f] Running
	I0717 19:39:16.182016  459061 system_pods.go:89] "kube-controller-manager-embed-certs-637675" [b2da7425-19f4-4435-8a30-17744a3289b0] Running
	I0717 19:39:16.182021  459061 system_pods.go:89] "kube-proxy-dns5j" [4d248751-6ee4-460d-b608-be6586613e3d] Running
	I0717 19:39:16.182025  459061 system_pods.go:89] "kube-scheduler-embed-certs-637675" [43f463da-858a-4261-b7a1-01e504e157f6] Running
	I0717 19:39:16.182033  459061 system_pods.go:89] "metrics-server-569cc877fc-jf42d" [c92dbb96-5721-4ff9-a428-9215223d2b83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:39:16.182042  459061 system_pods.go:89] "storage-provisioner" [11a18e44-b523-46b2-a890-dd693460e032] Running
	I0717 19:39:16.182049  459061 system_pods.go:126] duration metric: took 203.281636ms to wait for k8s-apps to be running ...
	I0717 19:39:16.182057  459061 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:39:16.182101  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:39:16.198464  459061 system_svc.go:56] duration metric: took 16.391405ms WaitForService to wait for kubelet
	I0717 19:39:16.198504  459061 kubeadm.go:582] duration metric: took 3.822728067s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:39:16.198531  459061 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:39:16.378407  459061 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:39:16.378440  459061 node_conditions.go:123] node cpu capacity is 2
	I0717 19:39:16.378451  459061 node_conditions.go:105] duration metric: took 179.91335ms to run NodePressure ...
	I0717 19:39:16.378465  459061 start.go:241] waiting for startup goroutines ...
	I0717 19:39:16.378476  459061 start.go:246] waiting for cluster config update ...
	I0717 19:39:16.378489  459061 start.go:255] writing updated cluster config ...
	I0717 19:39:16.378845  459061 ssh_runner.go:195] Run: rm -f paused
	I0717 19:39:16.431808  459061 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 19:39:16.433648  459061 out.go:177] * Done! kubectl is now configured to use "embed-certs-637675" cluster and "default" namespace by default
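Before the "Done!" line above, the log shows the pod_ready.go wait loop for "embed-certs-637675": each system-critical label selector (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) is polled until every matching pod in kube-system reports the Ready condition. The following is a simplified client-go sketch of that kind of wait, assuming a standard kubeconfig at the default location; it is an illustration of the pattern, not minikube's actual code, and podReady is an invented helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Label selectors mirror the system-critical components listed in the log.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	deadline := time.Now().Add(6 * time.Minute)

	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 {
				allReady := true
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						allReady = false
					}
				}
				if allReady {
					break
				}
			}
			if time.Now().After(deadline) {
				panic(fmt.Sprintf("pods matching %q not Ready in time", sel))
			}
			time.Sleep(2 * time.Second)
		}
	}
	fmt.Println("all system-critical pods Ready")
}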
	I0717 19:39:46.819105  459741 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:39:46.819209  459741 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 19:39:46.820837  459741 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:39:46.820940  459741 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:39:46.821010  459741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:39:46.821148  459741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:39:46.821282  459741 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:39:46.821377  459741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:39:46.823092  459741 out.go:204]   - Generating certificates and keys ...
	I0717 19:39:46.823190  459741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:39:46.823280  459741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:39:46.823409  459741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:39:46.823509  459741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:39:46.823629  459741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:39:46.823715  459741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:39:46.823802  459741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:39:46.823885  459741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:39:46.823975  459741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:39:46.824067  459741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:39:46.824109  459741 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:39:46.824183  459741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:39:46.824248  459741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:39:46.824309  459741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:39:46.824409  459741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:39:46.824506  459741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:39:46.824642  459741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:39:46.824729  459741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:39:46.824775  459741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:39:46.824869  459741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:39:46.826222  459741 out.go:204]   - Booting up control plane ...
	I0717 19:39:46.826334  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:39:46.826483  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:39:46.826566  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:39:46.826677  459741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:39:46.826855  459741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:39:46.826954  459741 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:39:46.827061  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827286  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827365  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827537  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827618  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827814  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827916  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.828105  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.828210  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.828440  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.828449  459741 kubeadm.go:310] 
	I0717 19:39:46.828482  459741 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:39:46.828544  459741 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:39:46.828555  459741 kubeadm.go:310] 
	I0717 19:39:46.828601  459741 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:39:46.828648  459741 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:39:46.828787  459741 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:39:46.828795  459741 kubeadm.go:310] 
	I0717 19:39:46.828928  459741 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:39:46.828975  459741 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:39:46.829023  459741 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:39:46.829033  459741 kubeadm.go:310] 
	I0717 19:39:46.829156  459741 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:39:46.829280  459741 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:39:46.829288  459741 kubeadm.go:310] 
	I0717 19:39:46.829430  459741 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:39:46.829538  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:39:46.829640  459741 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:39:46.829753  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:39:46.829812  459741 kubeadm.go:310] 
	W0717 19:39:46.829883  459741 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 19:39:46.829939  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:39:47.290949  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:39:47.307166  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:39:47.318260  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:39:47.318283  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:39:47.318336  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:39:47.328087  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:39:47.328150  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:39:47.339029  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:39:47.348854  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:39:47.348913  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:39:47.358498  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:39:47.368592  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:39:47.368651  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:39:47.379802  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:39:47.391069  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:39:47.391139  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:39:47.402410  459741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:39:47.620822  459741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:41:43.630999  459741 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:41:43.631161  459741 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 19:41:43.631238  459741 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:41:43.631322  459741 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:41:43.631452  459741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:41:43.631595  459741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:41:43.631767  459741 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:41:43.631852  459741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:41:43.633956  459741 out.go:204]   - Generating certificates and keys ...
	I0717 19:41:43.634058  459741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:41:43.634160  459741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:41:43.634292  459741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:41:43.634382  459741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:41:43.634457  459741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:41:43.634560  459741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:41:43.634646  459741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:41:43.634743  459741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:41:43.634848  459741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:41:43.634977  459741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:41:43.635038  459741 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:41:43.635088  459741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:41:43.635129  459741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:41:43.635173  459741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:41:43.635240  459741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:41:43.635326  459741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:41:43.635477  459741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:41:43.635594  459741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:41:43.635675  459741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:41:43.635758  459741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:41:43.637529  459741 out.go:204]   - Booting up control plane ...
	I0717 19:41:43.637719  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:41:43.637857  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:41:43.637948  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:41:43.638086  459741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:41:43.638278  459741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:41:43.638336  459741 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:41:43.638427  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.638656  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.638732  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.638966  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639046  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639310  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639407  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639665  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639769  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639950  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639969  459741 kubeadm.go:310] 
	I0717 19:41:43.640006  459741 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:41:43.640047  459741 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:41:43.640056  459741 kubeadm.go:310] 
	I0717 19:41:43.640101  459741 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:41:43.640148  459741 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:41:43.640247  459741 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:41:43.640255  459741 kubeadm.go:310] 
	I0717 19:41:43.640365  459741 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:41:43.640398  459741 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:41:43.640426  459741 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:41:43.640434  459741 kubeadm.go:310] 
	I0717 19:41:43.640580  459741 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:41:43.640664  459741 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:41:43.640676  459741 kubeadm.go:310] 
	I0717 19:41:43.640772  459741 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:41:43.640849  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:41:43.640912  459741 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:41:43.640975  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:41:43.640997  459741 kubeadm.go:310] 
	I0717 19:41:43.641050  459741 kubeadm.go:394] duration metric: took 8m2.947491611s to StartCluster
	I0717 19:41:43.641102  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:41:43.641159  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:41:43.691693  459741 cri.go:89] found id: ""
	I0717 19:41:43.691734  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.691746  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:41:43.691755  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:41:43.691822  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:41:43.730266  459741 cri.go:89] found id: ""
	I0717 19:41:43.730301  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.730311  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:41:43.730319  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:41:43.730401  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:41:43.766878  459741 cri.go:89] found id: ""
	I0717 19:41:43.766907  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.766916  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:41:43.766922  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:41:43.767012  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:41:43.810002  459741 cri.go:89] found id: ""
	I0717 19:41:43.810040  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.810051  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:41:43.810059  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:41:43.810133  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:41:43.846561  459741 cri.go:89] found id: ""
	I0717 19:41:43.846621  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.846637  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:41:43.846645  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:41:43.846715  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:41:43.884047  459741 cri.go:89] found id: ""
	I0717 19:41:43.884080  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.884091  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:41:43.884099  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:41:43.884224  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:41:43.931636  459741 cri.go:89] found id: ""
	I0717 19:41:43.931677  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.931691  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:41:43.931699  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:41:43.931768  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:41:43.969202  459741 cri.go:89] found id: ""
	I0717 19:41:43.969240  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.969260  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:41:43.969275  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:41:43.969296  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:41:44.026443  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:41:44.026500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:41:44.042750  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:41:44.042788  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:41:44.140053  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:41:44.140079  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:41:44.140093  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:41:44.263660  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:41:44.263704  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 19:41:44.311783  459741 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 19:41:44.311838  459741 out.go:239] * 
	W0717 19:41:44.311948  459741 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:41:44.311982  459741 out.go:239] * 
	W0717 19:41:44.313153  459741 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:41:44.316845  459741 out.go:177] 
	W0717 19:41:44.318001  459741 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:41:44.318059  459741 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 19:41:44.318087  459741 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 19:41:44.319471  459741 out.go:177] 
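	For reference, the commands below sketch the troubleshooting path suggested by the kubeadm and minikube output above (kubelet status, kubelet journal, CRI-O container listing, and the cgroup-driver hint). This is an illustrative sketch only: PROFILE is a placeholder for the affected minikube profile, and the flags are taken directly from the suggestions printed in this log.
	
		# open a shell on the minikube node (PROFILE is a placeholder)
		minikube ssh -p PROFILE
		# check whether the kubelet is running and inspect its recent logs
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		# list control-plane containers known to CRI-O
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# from the host, retry with the cgroup driver suggested above
		minikube start -p PROFILE --extra-config=kubelet.cgroup-driver=systemd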
	
	
	==> CRI-O <==
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.524566894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245698524543591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=958cd1ae-48c6-4ca9-afd4-4d0b1134914e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.525130654Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df5ab6a9-4aec-4001-8d71-fbbbca7f033b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.525191928Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df5ab6a9-4aec-4001-8d71-fbbbca7f033b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.525374944Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48e5a7e0f2ab78ae01bb2cd94dc7f9263c45ae6f2c395ddf07a0345de994354c,PodSandboxId:728b051abda92b9142c884ee532f4ac287339ee45160c63a0f4cac6e55e60d07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721245154483205900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11a18e44-b523-46b2-a890-dd693460e032,},Annotations:map[string]string{io.kubernetes.container.hash: 46490f3f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24336c4ef38287d9898eede33a346456e43912d0645a47e1ad017f588c33f5fc,PodSandboxId:7fdb130b2f33b50b1d2677d8b84782c31011f61607063b773d5f5fb49e5f0fb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245153052073256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-45xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c936942-55bb-44c9-b446-365ec316c390,},Annotations:map[string]string{io.kubernetes.container.hash: c0d9cec2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c225d15e2f6e8d567d624062f936369e4e42076ff901dc80241a0d8f2b237a,PodSandboxId:3d0f83a962a14e94ea404c00c086f11b0dec6f9f7eb514c4ca5c1a8ef678b478,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245153000966515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nw8g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
313a484-73be-49e2-a483-b15f47abc24a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a2d088d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b02cd67a42005dbf4a7cbd84fc14738b9a4c3453252f1e201e8a3bf15f6a70c,PodSandboxId:5c2d964094f6fe725bd7c6bc81feac321de171f448d7299e3f498af7c9ee39ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt
:1721245152187508378,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dns5j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d248751-6ee4-460d-b608-be6586613e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed485c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b541216eac8f924abf4b5b51a1910e7214f379861a78e6e31b3bc276ecfeee75,PodSandboxId:1cb88fe353ad5b5c586bd71accdf93507b452264d40711670f17b0584a7078a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721245132198308487,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0bdc9cd649de90bf6dc1987724b6b0b,},Annotations:map[string]string{io.kubernetes.container.hash: cbb32c79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0045a361a96bb3c286b58485d5377da51626c6188cb1bb36842915bf26ac7169,PodSandboxId:aa8ba3819e3b6cfa4f19d2aa291f204e55819547354018e043360d3829364e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721245132244960618,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82075de03dc9bcae774d7465efdadcda,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e719935cefb567b7356e58bd6783794df83c6e26e2f72360c06434dc4dcc23de,PodSandboxId:dd4bfd6e5cf1b72618802ffa717fd218e540f8ff74fd537bfaa3510235e629b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721245132139562532,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b98fa702f0c3bb49b21f790be6e03f,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4087023e0c078fbd5ef52104ec1f2a7cf1111f7bd25f6810947564b65358d50d,PodSandboxId:6d283b689c2cde8cd4919bc671d01ae6593d1743c5501845fbdd2a5a3b0c4046,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721245132171679539,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977352cb4399365844bbb5e38359809c,},Annotations:map[string]string{io.kubernetes.container.hash: fd8a4af2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df5ab6a9-4aec-4001-8d71-fbbbca7f033b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.564060445Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce9474de-aede-4338-b113-ea59cfbf5416 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.564416211Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce9474de-aede-4338-b113-ea59cfbf5416 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.566958723Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2bb593d2-943e-4edc-a38c-41daa9750345 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.567512750Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245698567486167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2bb593d2-943e-4edc-a38c-41daa9750345 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.568384632Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b92769f7-38d5-43c8-9f23-56c85ea43141 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.568566811Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b92769f7-38d5-43c8-9f23-56c85ea43141 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.568890005Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48e5a7e0f2ab78ae01bb2cd94dc7f9263c45ae6f2c395ddf07a0345de994354c,PodSandboxId:728b051abda92b9142c884ee532f4ac287339ee45160c63a0f4cac6e55e60d07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721245154483205900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11a18e44-b523-46b2-a890-dd693460e032,},Annotations:map[string]string{io.kubernetes.container.hash: 46490f3f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24336c4ef38287d9898eede33a346456e43912d0645a47e1ad017f588c33f5fc,PodSandboxId:7fdb130b2f33b50b1d2677d8b84782c31011f61607063b773d5f5fb49e5f0fb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245153052073256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-45xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c936942-55bb-44c9-b446-365ec316c390,},Annotations:map[string]string{io.kubernetes.container.hash: c0d9cec2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c225d15e2f6e8d567d624062f936369e4e42076ff901dc80241a0d8f2b237a,PodSandboxId:3d0f83a962a14e94ea404c00c086f11b0dec6f9f7eb514c4ca5c1a8ef678b478,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245153000966515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nw8g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
313a484-73be-49e2-a483-b15f47abc24a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a2d088d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b02cd67a42005dbf4a7cbd84fc14738b9a4c3453252f1e201e8a3bf15f6a70c,PodSandboxId:5c2d964094f6fe725bd7c6bc81feac321de171f448d7299e3f498af7c9ee39ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt
:1721245152187508378,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dns5j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d248751-6ee4-460d-b608-be6586613e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed485c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b541216eac8f924abf4b5b51a1910e7214f379861a78e6e31b3bc276ecfeee75,PodSandboxId:1cb88fe353ad5b5c586bd71accdf93507b452264d40711670f17b0584a7078a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721245132198308487,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0bdc9cd649de90bf6dc1987724b6b0b,},Annotations:map[string]string{io.kubernetes.container.hash: cbb32c79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0045a361a96bb3c286b58485d5377da51626c6188cb1bb36842915bf26ac7169,PodSandboxId:aa8ba3819e3b6cfa4f19d2aa291f204e55819547354018e043360d3829364e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721245132244960618,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82075de03dc9bcae774d7465efdadcda,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e719935cefb567b7356e58bd6783794df83c6e26e2f72360c06434dc4dcc23de,PodSandboxId:dd4bfd6e5cf1b72618802ffa717fd218e540f8ff74fd537bfaa3510235e629b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721245132139562532,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b98fa702f0c3bb49b21f790be6e03f,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4087023e0c078fbd5ef52104ec1f2a7cf1111f7bd25f6810947564b65358d50d,PodSandboxId:6d283b689c2cde8cd4919bc671d01ae6593d1743c5501845fbdd2a5a3b0c4046,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721245132171679539,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977352cb4399365844bbb5e38359809c,},Annotations:map[string]string{io.kubernetes.container.hash: fd8a4af2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b92769f7-38d5-43c8-9f23-56c85ea43141 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.613533373Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e24167c-8d04-4e5f-a6cb-4eb82f43bc72 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.613673568Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e24167c-8d04-4e5f-a6cb-4eb82f43bc72 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.614832906Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=310a42dd-1aa4-4171-b52f-df4470723c65 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.615236946Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245698615216476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=310a42dd-1aa4-4171-b52f-df4470723c65 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.616110830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=feff79b4-0666-4f82-a0d0-e0b9d6e3b788 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.616188390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=feff79b4-0666-4f82-a0d0-e0b9d6e3b788 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.616361948Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48e5a7e0f2ab78ae01bb2cd94dc7f9263c45ae6f2c395ddf07a0345de994354c,PodSandboxId:728b051abda92b9142c884ee532f4ac287339ee45160c63a0f4cac6e55e60d07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721245154483205900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11a18e44-b523-46b2-a890-dd693460e032,},Annotations:map[string]string{io.kubernetes.container.hash: 46490f3f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24336c4ef38287d9898eede33a346456e43912d0645a47e1ad017f588c33f5fc,PodSandboxId:7fdb130b2f33b50b1d2677d8b84782c31011f61607063b773d5f5fb49e5f0fb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245153052073256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-45xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c936942-55bb-44c9-b446-365ec316c390,},Annotations:map[string]string{io.kubernetes.container.hash: c0d9cec2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c225d15e2f6e8d567d624062f936369e4e42076ff901dc80241a0d8f2b237a,PodSandboxId:3d0f83a962a14e94ea404c00c086f11b0dec6f9f7eb514c4ca5c1a8ef678b478,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245153000966515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nw8g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
313a484-73be-49e2-a483-b15f47abc24a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a2d088d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b02cd67a42005dbf4a7cbd84fc14738b9a4c3453252f1e201e8a3bf15f6a70c,PodSandboxId:5c2d964094f6fe725bd7c6bc81feac321de171f448d7299e3f498af7c9ee39ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt
:1721245152187508378,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dns5j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d248751-6ee4-460d-b608-be6586613e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed485c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b541216eac8f924abf4b5b51a1910e7214f379861a78e6e31b3bc276ecfeee75,PodSandboxId:1cb88fe353ad5b5c586bd71accdf93507b452264d40711670f17b0584a7078a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721245132198308487,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0bdc9cd649de90bf6dc1987724b6b0b,},Annotations:map[string]string{io.kubernetes.container.hash: cbb32c79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0045a361a96bb3c286b58485d5377da51626c6188cb1bb36842915bf26ac7169,PodSandboxId:aa8ba3819e3b6cfa4f19d2aa291f204e55819547354018e043360d3829364e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721245132244960618,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82075de03dc9bcae774d7465efdadcda,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e719935cefb567b7356e58bd6783794df83c6e26e2f72360c06434dc4dcc23de,PodSandboxId:dd4bfd6e5cf1b72618802ffa717fd218e540f8ff74fd537bfaa3510235e629b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721245132139562532,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b98fa702f0c3bb49b21f790be6e03f,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4087023e0c078fbd5ef52104ec1f2a7cf1111f7bd25f6810947564b65358d50d,PodSandboxId:6d283b689c2cde8cd4919bc671d01ae6593d1743c5501845fbdd2a5a3b0c4046,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721245132171679539,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977352cb4399365844bbb5e38359809c,},Annotations:map[string]string{io.kubernetes.container.hash: fd8a4af2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=feff79b4-0666-4f82-a0d0-e0b9d6e3b788 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.662487328Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7cf79d0b-355f-4423-bfeb-8e3331fd2acf name=/runtime.v1.RuntimeService/Version
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.662679333Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7cf79d0b-355f-4423-bfeb-8e3331fd2acf name=/runtime.v1.RuntimeService/Version
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.664135327Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25e79f84-b278-40de-aa22-6cd1a3c1c3c8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.664632987Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245698664560660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25e79f84-b278-40de-aa22-6cd1a3c1c3c8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.665107587Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3032ad0c-9cea-4656-a35c-5abec451e443 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.665182618Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3032ad0c-9cea-4656-a35c-5abec451e443 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:48:18 embed-certs-637675 crio[728]: time="2024-07-17 19:48:18.665357549Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48e5a7e0f2ab78ae01bb2cd94dc7f9263c45ae6f2c395ddf07a0345de994354c,PodSandboxId:728b051abda92b9142c884ee532f4ac287339ee45160c63a0f4cac6e55e60d07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721245154483205900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11a18e44-b523-46b2-a890-dd693460e032,},Annotations:map[string]string{io.kubernetes.container.hash: 46490f3f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24336c4ef38287d9898eede33a346456e43912d0645a47e1ad017f588c33f5fc,PodSandboxId:7fdb130b2f33b50b1d2677d8b84782c31011f61607063b773d5f5fb49e5f0fb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245153052073256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-45xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c936942-55bb-44c9-b446-365ec316c390,},Annotations:map[string]string{io.kubernetes.container.hash: c0d9cec2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c225d15e2f6e8d567d624062f936369e4e42076ff901dc80241a0d8f2b237a,PodSandboxId:3d0f83a962a14e94ea404c00c086f11b0dec6f9f7eb514c4ca5c1a8ef678b478,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245153000966515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nw8g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
313a484-73be-49e2-a483-b15f47abc24a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a2d088d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b02cd67a42005dbf4a7cbd84fc14738b9a4c3453252f1e201e8a3bf15f6a70c,PodSandboxId:5c2d964094f6fe725bd7c6bc81feac321de171f448d7299e3f498af7c9ee39ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt
:1721245152187508378,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dns5j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d248751-6ee4-460d-b608-be6586613e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed485c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b541216eac8f924abf4b5b51a1910e7214f379861a78e6e31b3bc276ecfeee75,PodSandboxId:1cb88fe353ad5b5c586bd71accdf93507b452264d40711670f17b0584a7078a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721245132198308487,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0bdc9cd649de90bf6dc1987724b6b0b,},Annotations:map[string]string{io.kubernetes.container.hash: cbb32c79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0045a361a96bb3c286b58485d5377da51626c6188cb1bb36842915bf26ac7169,PodSandboxId:aa8ba3819e3b6cfa4f19d2aa291f204e55819547354018e043360d3829364e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721245132244960618,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82075de03dc9bcae774d7465efdadcda,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e719935cefb567b7356e58bd6783794df83c6e26e2f72360c06434dc4dcc23de,PodSandboxId:dd4bfd6e5cf1b72618802ffa717fd218e540f8ff74fd537bfaa3510235e629b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721245132139562532,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b98fa702f0c3bb49b21f790be6e03f,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4087023e0c078fbd5ef52104ec1f2a7cf1111f7bd25f6810947564b65358d50d,PodSandboxId:6d283b689c2cde8cd4919bc671d01ae6593d1743c5501845fbdd2a5a3b0c4046,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721245132171679539,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977352cb4399365844bbb5e38359809c,},Annotations:map[string]string{io.kubernetes.container.hash: fd8a4af2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3032ad0c-9cea-4656-a35c-5abec451e443 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	48e5a7e0f2ab7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   728b051abda92       storage-provisioner
	24336c4ef3828       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   7fdb130b2f33b       coredns-7db6d8ff4d-45xn7
	b1c225d15e2f6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3d0f83a962a14       coredns-7db6d8ff4d-nw8g8
	4b02cd67a4200       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   9 minutes ago       Running             kube-proxy                0                   5c2d964094f6f       kube-proxy-dns5j
	0045a361a96bb       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   9 minutes ago       Running             kube-scheduler            2                   aa8ba3819e3b6       kube-scheduler-embed-certs-637675
	b541216eac8f9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   1cb88fe353ad5       etcd-embed-certs-637675
	4087023e0c078       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   9 minutes ago       Running             kube-apiserver            2                   6d283b689c2cd       kube-apiserver-embed-certs-637675
	e719935cefb56       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   9 minutes ago       Running             kube-controller-manager   2                   dd4bfd6e5cf1b       kube-controller-manager-embed-certs-637675
	
	
	==> coredns [24336c4ef38287d9898eede33a346456e43912d0645a47e1ad017f588c33f5fc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b1c225d15e2f6e8d567d624062f936369e4e42076ff901dc80241a0d8f2b237a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-637675
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-637675
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=embed-certs-637675
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T19_38_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 19:38:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-637675
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 19:48:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 19:44:24 +0000   Wed, 17 Jul 2024 19:38:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 19:44:24 +0000   Wed, 17 Jul 2024 19:38:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 19:44:24 +0000   Wed, 17 Jul 2024 19:38:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 19:44:24 +0000   Wed, 17 Jul 2024 19:38:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.140
	  Hostname:    embed-certs-637675
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbc27d91064f433ea9e3ab8310569cdd
	  System UUID:                fbc27d91-064f-433e-a9e3-ab8310569cdd
	  Boot ID:                    460442a8-053d-4618-a237-37e320ba92e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-45xn7                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-nw8g8                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-637675                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-embed-certs-637675             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-embed-certs-637675    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 kube-proxy-dns5j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-637675             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-569cc877fc-jf42d               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node embed-certs-637675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node embed-certs-637675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node embed-certs-637675 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node embed-certs-637675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node embed-certs-637675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node embed-certs-637675 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s                   node-controller  Node embed-certs-637675 event: Registered Node embed-certs-637675 in Controller
	
	
	==> dmesg <==
	[  +0.045243] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.003931] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.421170] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.603500] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.236915] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.066877] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057944] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.196685] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.125991] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.340304] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.283972] systemd-fstab-generator[808]: Ignoring "noauto" option for root device
	[  +0.061966] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.899535] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[Jul17 19:34] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.299545] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.636813] kauditd_printk_skb: 27 callbacks suppressed
	[Jul17 19:38] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.636497] systemd-fstab-generator[3587]: Ignoring "noauto" option for root device
	[  +4.587257] kauditd_printk_skb: 55 callbacks suppressed
	[  +1.477763] systemd-fstab-generator[3911]: Ignoring "noauto" option for root device
	[Jul17 19:39] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.404918] systemd-fstab-generator[4236]: Ignoring "noauto" option for root device
	[Jul17 19:40] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [b541216eac8f924abf4b5b51a1910e7214f379861a78e6e31b3bc276ecfeee75] <==
	{"level":"info","ts":"2024-07-17T19:38:52.612669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac switched to configuration voters=(15657868212029965228)"}
	{"level":"info","ts":"2024-07-17T19:38:52.612892Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","added-peer-id":"d94bec2e0ded43ac","added-peer-peer-urls":["https://192.168.39.140:2380"]}
	{"level":"info","ts":"2024-07-17T19:38:52.622045Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-07-17T19:38:52.622087Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-07-17T19:38:52.621986Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T19:38:52.630043Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d94bec2e0ded43ac","initial-advertise-peer-urls":["https://192.168.39.140:2380"],"listen-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.140:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T19:38:52.630099Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T19:38:52.863682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-17T19:38:52.863814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T19:38:52.863908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgPreVoteResp from d94bec2e0ded43ac at term 1"}
	{"level":"info","ts":"2024-07-17T19:38:52.863967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T19:38:52.863992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgVoteResp from d94bec2e0ded43ac at term 2"}
	{"level":"info","ts":"2024-07-17T19:38:52.864072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became leader at term 2"}
	{"level":"info","ts":"2024-07-17T19:38:52.864097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d94bec2e0ded43ac elected leader d94bec2e0ded43ac at term 2"}
	{"level":"info","ts":"2024-07-17T19:38:52.868884Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d94bec2e0ded43ac","local-member-attributes":"{Name:embed-certs-637675 ClientURLs:[https://192.168.39.140:2379]}","request-path":"/0/members/d94bec2e0ded43ac/attributes","cluster-id":"e5cf977c4e262fb4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T19:38:52.868992Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T19:38:52.869386Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:38:52.870733Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T19:38:52.871026Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T19:38:52.871057Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T19:38:52.872502Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.140:2379"}
	{"level":"info","ts":"2024-07-17T19:38:52.875256Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:38:52.875346Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:38:52.880665Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T19:38:52.880973Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:48:19 up 14 min,  0 users,  load average: 0.17, 0.17, 0.11
	Linux embed-certs-637675 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4087023e0c078fbd5ef52104ec1f2a7cf1111f7bd25f6810947564b65358d50d] <==
	I0717 19:42:14.829766       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:43:54.888503       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:43:54.888972       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0717 19:43:55.889446       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:43:55.889511       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 19:43:55.889519       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:43:55.889574       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:43:55.889700       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 19:43:55.891034       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:44:55.890030       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:44:55.890134       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 19:44:55.890149       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:44:55.891234       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:44:55.891373       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 19:44:55.891415       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:46:55.891271       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:46:55.891566       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 19:46:55.891726       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:46:55.891778       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:46:55.891864       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 19:46:55.893045       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e719935cefb567b7356e58bd6783794df83c6e26e2f72360c06434dc4dcc23de] <==
	I0717 19:42:41.604098       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:43:11.158820       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:43:11.612743       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:43:41.164464       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:43:41.620563       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:44:11.172899       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:44:11.628462       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:44:41.177763       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:44:41.637307       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:45:11.185523       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:45:11.653906       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 19:45:16.364158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="460.657µs"
	I0717 19:45:27.361376       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="663.123µs"
	E0717 19:45:41.191120       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:45:41.663719       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:46:11.196387       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:46:11.672899       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:46:41.201357       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:46:41.682105       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:47:11.208990       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:47:11.694006       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:47:41.214674       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:47:41.701541       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:48:11.219386       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:48:11.710315       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4b02cd67a42005dbf4a7cbd84fc14738b9a4c3453252f1e201e8a3bf15f6a70c] <==
	I0717 19:39:12.493966       1 server_linux.go:69] "Using iptables proxy"
	I0717 19:39:12.517203       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.140"]
	I0717 19:39:12.601582       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 19:39:12.601676       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 19:39:12.601691       1 server_linux.go:165] "Using iptables Proxier"
	I0717 19:39:12.612778       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:39:12.613005       1 server.go:872] "Version info" version="v1.30.2"
	I0717 19:39:12.613033       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:39:12.615127       1 config.go:192] "Starting service config controller"
	I0717 19:39:12.615169       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 19:39:12.615199       1 config.go:101] "Starting endpoint slice config controller"
	I0717 19:39:12.615203       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 19:39:12.615933       1 config.go:319] "Starting node config controller"
	I0717 19:39:12.615961       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 19:39:12.715442       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 19:39:12.715504       1 shared_informer.go:320] Caches are synced for service config
	I0717 19:39:12.716010       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0045a361a96bb3c286b58485d5377da51626c6188cb1bb36842915bf26ac7169] <==
	W0717 19:38:54.947851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 19:38:54.947880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 19:38:54.947996       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 19:38:54.948157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 19:38:54.948531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 19:38:54.948567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 19:38:54.948655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 19:38:54.948690       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 19:38:54.950214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 19:38:54.950328       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 19:38:54.950368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 19:38:54.950798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 19:38:54.950916       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 19:38:54.951531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 19:38:55.802150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 19:38:55.802347       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 19:38:55.821626       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 19:38:55.821764       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 19:38:55.834315       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 19:38:55.834363       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:38:55.975562       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 19:38:55.975666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 19:38:56.104982       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 19:38:56.105082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0717 19:38:57.638502       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 19:45:57 embed-certs-637675 kubelet[3918]: E0717 19:45:57.372831    3918 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:45:57 embed-certs-637675 kubelet[3918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:45:57 embed-certs-637675 kubelet[3918]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:45:57 embed-certs-637675 kubelet[3918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:45:57 embed-certs-637675 kubelet[3918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:46:04 embed-certs-637675 kubelet[3918]: E0717 19:46:04.344652    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:46:18 embed-certs-637675 kubelet[3918]: E0717 19:46:18.343934    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:46:33 embed-certs-637675 kubelet[3918]: E0717 19:46:33.345837    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:46:49 embed-certs-637675 kubelet[3918]: E0717 19:46:49.344839    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:46:57 embed-certs-637675 kubelet[3918]: E0717 19:46:57.370827    3918 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:46:57 embed-certs-637675 kubelet[3918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:46:57 embed-certs-637675 kubelet[3918]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:46:57 embed-certs-637675 kubelet[3918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:46:57 embed-certs-637675 kubelet[3918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:47:02 embed-certs-637675 kubelet[3918]: E0717 19:47:02.344988    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:47:17 embed-certs-637675 kubelet[3918]: E0717 19:47:17.345034    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:47:31 embed-certs-637675 kubelet[3918]: E0717 19:47:31.344822    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:47:45 embed-certs-637675 kubelet[3918]: E0717 19:47:45.344777    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:47:57 embed-certs-637675 kubelet[3918]: E0717 19:47:57.370945    3918 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:47:57 embed-certs-637675 kubelet[3918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:47:57 embed-certs-637675 kubelet[3918]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:47:57 embed-certs-637675 kubelet[3918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:47:57 embed-certs-637675 kubelet[3918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:48:00 embed-certs-637675 kubelet[3918]: E0717 19:48:00.344720    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:48:11 embed-certs-637675 kubelet[3918]: E0717 19:48:11.345750    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	
	
	==> storage-provisioner [48e5a7e0f2ab78ae01bb2cd94dc7f9263c45ae6f2c395ddf07a0345de994354c] <==
	I0717 19:39:14.584671       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 19:39:14.601108       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 19:39:14.601159       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 19:39:14.610567       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 19:39:14.611551       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-637675_92b328b8-f4d0-4f3b-85d8-718bbeb8a15e!
	I0717 19:39:14.611485       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df6d8ef0-41fb-440c-add2-488fbe8a8536", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-637675_92b328b8-f4d0-4f3b-85d8-718bbeb8a15e became leader
	I0717 19:39:14.712777       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-637675_92b328b8-f4d0-4f3b-85d8-718bbeb8a15e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-637675 -n embed-certs-637675
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-637675 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-jf42d
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-637675 describe pod metrics-server-569cc877fc-jf42d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-637675 describe pod metrics-server-569cc877fc-jf42d: exit status 1 (67.254468ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-jf42d" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-637675 describe pod metrics-server-569cc877fc-jf42d: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.32s)
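For reference, the post-mortem collection shown above can be reproduced by hand against the same profile. This is a minimal sketch assuming the embed-certs-637675 cluster from this run is still available; the commands are the ones the harness invoked above, and the final describe is expected to return NotFound once the metrics-server pod has been removed, as in the stderr output shown.

	# check that the apiserver for the profile reports as running
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-637675 -n embed-certs-637675
	# list pods that are not in the Running phase, across all namespaces
	kubectl --context embed-certs-637675 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# describe the non-running pod reported above (may return NotFound if it has since been deleted)
	kubectl --context embed-certs-637675 describe pod metrics-server-569cc877fc-jf42d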

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:41:56.647464  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:42:13.090852  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:42:25.850739  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:42:32.425133  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:42:49.704539  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:42:58.297468  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:43:19.691481  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
(previous WARNING repeated 20 more times)
E0717 19:43:52.816473  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
(previous WARNING repeated 2 more times)
E0717 19:43:55.471848  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
(previous WARNING repeated 16 more times)
E0717 19:44:12.749770  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
(previous WARNING repeated 34 more times)
E0717 19:44:47.588797  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
(previous WARNING repeated 17 more times)
E0717 19:45:05.951286  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
(previous WARNING repeated 9 more times)
E0717 19:45:15.861199  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
(previous WARNING repeated 46 more times)
E0717 19:46:02.804182  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
(previous WARNING repeated 24 more times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:46:35.251794  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:46:56.646870  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:47:13.090984  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:47:32.424886  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:47:49.704652  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:48:09.000303  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:48:52.816473  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
	[the line above repeats 54 more times]
E0717 19:49:47.588890  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
	[the line above repeats 17 more times]
E0717 19:50:05.951197  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
	[the line above repeats 41 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-998147 -n old-k8s-version-998147
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-998147 -n old-k8s-version-998147: exit status 2 (235.923079ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-998147" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
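The repeated warnings above are the harness polling the dashboard pods by label selector against the profile's apiserver. An equivalent manual check, assuming the profile's kubectl context is named old-k8s-version-998147 (minikube normally names the context after the profile), would be:

	kubectl --context old-k8s-version-998147 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

Because the apiserver at 192.168.72.208:8443 is refusing connections, this check would fail the same way until the control plane comes back up.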
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-998147 -n old-k8s-version-998147
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-998147 -n old-k8s-version-998147: exit status 2 (231.807418ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
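The two status probes above disagree (Host reports Running while APIServer reports Stopped). To see all component states in one shot, the unformatted status command could be run against the same profile, for example:

	out/minikube-linux-amd64 status -p old-k8s-version-998147

which prints host, kubelet, apiserver, and kubeconfig state; a non-zero exit status (here exit status 2) again indicates a stopped component.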
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-998147 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-998147 logs -n 25: (1.852522636s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-369638 sudo cat                              | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo find                             | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo crio                             | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-369638                                       | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	| delete  | -p                                                     | disable-driver-mounts-728347 | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | disable-driver-mounts-728347                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:25 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-637675            | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC | 17 Jul 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-713715             | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC | 17 Jul 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-713715                                   | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-378944  | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC | 17 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC |                     |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-998147        | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-637675                 | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-713715                  | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC | 17 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-713715 --memory=2200                     | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:37 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-378944       | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:38 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-998147             | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:29:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:29:11.500453  459741 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:29:11.500622  459741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:29:11.500633  459741 out.go:304] Setting ErrFile to fd 2...
	I0717 19:29:11.500639  459741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:29:11.500842  459741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:29:11.501399  459741 out.go:298] Setting JSON to false
	I0717 19:29:11.502411  459741 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11494,"bootTime":1721233057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:29:11.502474  459741 start.go:139] virtualization: kvm guest
	I0717 19:29:11.504961  459741 out.go:177] * [old-k8s-version-998147] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:29:11.506551  459741 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 19:29:11.506614  459741 notify.go:220] Checking for updates...
	I0717 19:29:11.509388  459741 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:29:11.511209  459741 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:29:11.512669  459741 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:29:11.514164  459741 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:29:11.515499  459741 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:29:11.517240  459741 config.go:182] Loaded profile config "old-k8s-version-998147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:29:11.517702  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:29:11.517772  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:29:11.533954  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42501
	I0717 19:29:11.534390  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:29:11.534975  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:29:11.535003  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:29:11.535362  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:29:11.535550  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:29:11.537723  459741 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 19:29:11.539119  459741 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:29:11.539416  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:29:11.539452  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:29:11.554412  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32849
	I0717 19:29:11.554815  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:29:11.555296  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:29:11.555317  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:29:11.555633  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:29:11.555830  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:29:11.590907  459741 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:29:11.592089  459741 start.go:297] selected driver: kvm2
	I0717 19:29:11.592110  459741 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:29:11.592224  459741 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:29:11.592942  459741 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:29:11.593047  459741 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:29:11.607578  459741 install.go:137] /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 19:29:11.607960  459741 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:29:11.608027  459741 cni.go:84] Creating CNI manager for ""
	I0717 19:29:11.608045  459741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:29:11.608102  459741 start.go:340] cluster config:
	{Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:29:11.608223  459741 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:29:11.609956  459741 out.go:177] * Starting "old-k8s-version-998147" primary control-plane node in "old-k8s-version-998147" cluster
	I0717 19:29:15.576809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:11.611130  459741 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:29:11.611167  459741 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 19:29:11.611178  459741 cache.go:56] Caching tarball of preloaded images
	I0717 19:29:11.611285  459741 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:29:11.611302  459741 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 19:29:11.611414  459741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json ...
	I0717 19:29:11.611598  459741 start.go:360] acquireMachinesLock for old-k8s-version-998147: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:29:18.648779  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:24.728819  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:27.800821  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:33.880750  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:36.952809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:43.032777  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:46.104785  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:52.184787  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:55.260741  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:01.336761  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:04.408863  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:10.488814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:13.560771  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:19.640809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:22.712791  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:28.792742  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:31.864819  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:37.944814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:41.016844  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:47.096765  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:50.168766  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:56.248814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:59.320805  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:05.400752  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:08.472800  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:14.552805  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:17.624781  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:23.704775  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:26.776769  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:32.856798  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:35.928859  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:42.008795  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:45.080741  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:51.160806  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:54.232765  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:00.312835  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:03.384814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:09.464779  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:12.536704  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:18.616758  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:21.688749  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:27.768726  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:30.840760  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:33.845161  459147 start.go:364] duration metric: took 4m31.30170624s to acquireMachinesLock for "no-preload-713715"
	I0717 19:32:33.845231  459147 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:32:33.845239  459147 fix.go:54] fixHost starting: 
	I0717 19:32:33.845641  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:32:33.845672  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:32:33.861218  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I0717 19:32:33.861739  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:32:33.862269  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:32:33.862294  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:32:33.862688  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:32:33.862906  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:33.863078  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:32:33.864713  459147 fix.go:112] recreateIfNeeded on no-preload-713715: state=Stopped err=<nil>
	I0717 19:32:33.864747  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	W0717 19:32:33.864918  459147 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:32:33.866791  459147 out.go:177] * Restarting existing kvm2 VM for "no-preload-713715" ...
	I0717 19:32:33.842533  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:32:33.842571  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:32:33.842991  459061 buildroot.go:166] provisioning hostname "embed-certs-637675"
	I0717 19:32:33.843030  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:32:33.843258  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:32:33.844991  459061 machine.go:97] duration metric: took 4m37.424855793s to provisionDockerMachine
	I0717 19:32:33.845049  459061 fix.go:56] duration metric: took 4m37.444711115s for fixHost
	I0717 19:32:33.845058  459061 start.go:83] releasing machines lock for "embed-certs-637675", held for 4m37.444736968s
	W0717 19:32:33.845085  459061 start.go:714] error starting host: provision: host is not running
	W0717 19:32:33.845226  459061 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 19:32:33.845240  459061 start.go:729] Will try again in 5 seconds ...
	I0717 19:32:33.868034  459147 main.go:141] libmachine: (no-preload-713715) Calling .Start
	I0717 19:32:33.868203  459147 main.go:141] libmachine: (no-preload-713715) Ensuring networks are active...
	I0717 19:32:33.868998  459147 main.go:141] libmachine: (no-preload-713715) Ensuring network default is active
	I0717 19:32:33.869310  459147 main.go:141] libmachine: (no-preload-713715) Ensuring network mk-no-preload-713715 is active
	I0717 19:32:33.869667  459147 main.go:141] libmachine: (no-preload-713715) Getting domain xml...
	I0717 19:32:33.870300  459147 main.go:141] libmachine: (no-preload-713715) Creating domain...
	I0717 19:32:35.077699  459147 main.go:141] libmachine: (no-preload-713715) Waiting to get IP...
	I0717 19:32:35.078453  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.078991  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.079061  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.078942  460425 retry.go:31] will retry after 213.705648ms: waiting for machine to come up
	I0717 19:32:35.294580  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.294987  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.295015  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.294949  460425 retry.go:31] will retry after 341.137055ms: waiting for machine to come up
	I0717 19:32:35.637531  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.637894  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.637922  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.637842  460425 retry.go:31] will retry after 479.10915ms: waiting for machine to come up
	I0717 19:32:36.118434  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:36.118887  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:36.118918  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:36.118837  460425 retry.go:31] will retry after 404.249247ms: waiting for machine to come up
	I0717 19:32:36.524442  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:36.524847  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:36.524880  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:36.524812  460425 retry.go:31] will retry after 737.708741ms: waiting for machine to come up
	I0717 19:32:37.263864  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:37.264365  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:37.264393  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:37.264241  460425 retry.go:31] will retry after 793.874529ms: waiting for machine to come up
	I0717 19:32:38.846990  459061 start.go:360] acquireMachinesLock for embed-certs-637675: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:32:38.059206  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:38.059645  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:38.059671  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:38.059592  460425 retry.go:31] will retry after 831.952935ms: waiting for machine to come up
	I0717 19:32:38.893113  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:38.893595  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:38.893623  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:38.893496  460425 retry.go:31] will retry after 955.463175ms: waiting for machine to come up
	I0717 19:32:39.850681  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:39.851111  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:39.851146  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:39.851045  460425 retry.go:31] will retry after 1.513026699s: waiting for machine to come up
	I0717 19:32:41.365899  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:41.366497  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:41.366528  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:41.366435  460425 retry.go:31] will retry after 1.503398124s: waiting for machine to come up
	I0717 19:32:42.872396  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:42.872932  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:42.872961  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:42.872904  460425 retry.go:31] will retry after 2.818722445s: waiting for machine to come up
	I0717 19:32:45.692847  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:45.693240  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:45.693270  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:45.693168  460425 retry.go:31] will retry after 2.647833654s: waiting for machine to come up
	I0717 19:32:48.344167  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:48.344671  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:48.344711  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:48.344593  460425 retry.go:31] will retry after 3.625317785s: waiting for machine to come up
	I0717 19:32:51.973297  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.973853  459147 main.go:141] libmachine: (no-preload-713715) Found IP for machine: 192.168.61.66
	I0717 19:32:51.973882  459147 main.go:141] libmachine: (no-preload-713715) Reserving static IP address...
	I0717 19:32:51.973897  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has current primary IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.974288  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "no-preload-713715", mac: "52:54:00:9e:fc:38", ip: "192.168.61.66"} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:51.974314  459147 main.go:141] libmachine: (no-preload-713715) DBG | skip adding static IP to network mk-no-preload-713715 - found existing host DHCP lease matching {name: "no-preload-713715", mac: "52:54:00:9e:fc:38", ip: "192.168.61.66"}
	I0717 19:32:51.974324  459147 main.go:141] libmachine: (no-preload-713715) Reserved static IP address: 192.168.61.66
	I0717 19:32:51.974334  459147 main.go:141] libmachine: (no-preload-713715) Waiting for SSH to be available...
	I0717 19:32:51.974342  459147 main.go:141] libmachine: (no-preload-713715) DBG | Getting to WaitForSSH function...
	I0717 19:32:51.976322  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.976760  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:51.976804  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.976918  459147 main.go:141] libmachine: (no-preload-713715) DBG | Using SSH client type: external
	I0717 19:32:51.976956  459147 main.go:141] libmachine: (no-preload-713715) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa (-rw-------)
	I0717 19:32:51.976993  459147 main.go:141] libmachine: (no-preload-713715) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:32:51.977004  459147 main.go:141] libmachine: (no-preload-713715) DBG | About to run SSH command:
	I0717 19:32:51.977013  459147 main.go:141] libmachine: (no-preload-713715) DBG | exit 0
	I0717 19:32:52.100405  459147 main.go:141] libmachine: (no-preload-713715) DBG | SSH cmd err, output: <nil>: 
	I0717 19:32:52.100914  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetConfigRaw
	I0717 19:32:52.101578  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:52.103993  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.104431  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.104461  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.104779  459147 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/config.json ...
	I0717 19:32:52.104987  459147 machine.go:94] provisionDockerMachine start ...
	I0717 19:32:52.105006  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:52.105234  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.107642  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.108002  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.108027  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.108132  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.108311  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.108472  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.108628  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.108804  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.109027  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.109037  459147 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:32:52.216916  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:32:52.216949  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.217209  459147 buildroot.go:166] provisioning hostname "no-preload-713715"
	I0717 19:32:52.217238  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.217427  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.220152  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.220434  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.220472  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.220716  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.220923  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.221117  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.221230  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.221386  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.221575  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.221592  459147 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-713715 && echo "no-preload-713715" | sudo tee /etc/hostname
	I0717 19:32:52.343761  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-713715
	
	I0717 19:32:52.343802  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.347059  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.347370  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.347400  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.347652  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.347883  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.348182  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.348374  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.348625  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.348820  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.348836  459147 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-713715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-713715/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-713715' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:32:53.313707  459447 start.go:364] duration metric: took 4m16.715852426s to acquireMachinesLock for "default-k8s-diff-port-378944"
	I0717 19:32:53.313783  459447 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:32:53.313790  459447 fix.go:54] fixHost starting: 
	I0717 19:32:53.314243  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:32:53.314285  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:32:53.330763  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0717 19:32:53.331159  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:32:53.331660  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:32:53.331686  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:32:53.332089  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:32:53.332319  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:32:53.332479  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:32:53.334126  459447 fix.go:112] recreateIfNeeded on default-k8s-diff-port-378944: state=Stopped err=<nil>
	I0717 19:32:53.334172  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	W0717 19:32:53.334327  459447 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:32:53.336801  459447 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-378944" ...
	I0717 19:32:52.462144  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:32:52.462179  459147 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:32:52.462197  459147 buildroot.go:174] setting up certificates
	I0717 19:32:52.462210  459147 provision.go:84] configureAuth start
	I0717 19:32:52.462224  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.462579  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:52.465348  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.465889  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.465919  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.466069  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.468522  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.468914  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.468950  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.469041  459147 provision.go:143] copyHostCerts
	I0717 19:32:52.469126  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:32:52.469146  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:32:52.469234  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:32:52.469357  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:32:52.469367  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:32:52.469408  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:32:52.469492  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:32:52.469501  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:32:52.469535  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:32:52.469621  459147 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.no-preload-713715 san=[127.0.0.1 192.168.61.66 localhost minikube no-preload-713715]
	I0717 19:32:52.650963  459147 provision.go:177] copyRemoteCerts
	I0717 19:32:52.651037  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:32:52.651075  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.654245  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.654597  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.654616  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.654825  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.655055  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.655215  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.655411  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:52.739048  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:32:52.762566  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 19:32:52.785616  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:32:52.808881  459147 provision.go:87] duration metric: took 346.648771ms to configureAuth
	I0717 19:32:52.808922  459147 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:32:52.809145  459147 config.go:182] Loaded profile config "no-preload-713715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:32:52.809246  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.812111  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.812423  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.812457  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.812686  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.812885  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.813186  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.813346  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.813542  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.813778  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.813800  459147 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:32:53.076607  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:32:53.076638  459147 machine.go:97] duration metric: took 971.636298ms to provisionDockerMachine
	I0717 19:32:53.076652  459147 start.go:293] postStartSetup for "no-preload-713715" (driver="kvm2")
	I0717 19:32:53.076685  459147 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:32:53.076714  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.077033  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:32:53.077068  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.079605  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.079887  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.079911  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.080028  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.080217  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.080401  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.080593  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.163562  459147 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:32:53.167996  459147 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:32:53.168026  459147 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:32:53.168111  459147 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:32:53.168194  459147 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:32:53.168304  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:32:53.178039  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:32:53.201841  459147 start.go:296] duration metric: took 125.171457ms for postStartSetup
	I0717 19:32:53.201908  459147 fix.go:56] duration metric: took 19.356669392s for fixHost
	I0717 19:32:53.201944  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.204438  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.204823  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.204847  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.205012  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.205195  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.205352  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.205501  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.205632  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:53.205807  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:53.205818  459147 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:32:53.313516  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244773.289121394
	
	I0717 19:32:53.313540  459147 fix.go:216] guest clock: 1721244773.289121394
	I0717 19:32:53.313547  459147 fix.go:229] Guest: 2024-07-17 19:32:53.289121394 +0000 UTC Remote: 2024-07-17 19:32:53.201923093 +0000 UTC m=+290.801143172 (delta=87.198301ms)
	I0717 19:32:53.313569  459147 fix.go:200] guest clock delta is within tolerance: 87.198301ms
	I0717 19:32:53.313595  459147 start.go:83] releasing machines lock for "no-preload-713715", held for 19.468370802s
	I0717 19:32:53.313630  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.313917  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:53.316881  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.317256  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.317287  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.317443  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.317922  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.318107  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.318182  459147 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:32:53.318238  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.318358  459147 ssh_runner.go:195] Run: cat /version.json
	I0717 19:32:53.318384  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.321257  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321424  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321620  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.321641  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321748  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.321772  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321815  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.322061  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.322079  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.322282  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.322280  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.322459  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.322464  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.322592  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.401861  459147 ssh_runner.go:195] Run: systemctl --version
	I0717 19:32:53.425378  459147 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:32:53.567192  459147 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:32:53.575354  459147 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:32:53.575425  459147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:32:53.595781  459147 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:32:53.595818  459147 start.go:495] detecting cgroup driver to use...
	I0717 19:32:53.595955  459147 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:32:53.611488  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:32:53.625548  459147 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:32:53.625612  459147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:32:53.639207  459147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:32:53.652721  459147 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:32:53.772322  459147 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:32:53.942009  459147 docker.go:233] disabling docker service ...
	I0717 19:32:53.942092  459147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:32:53.961729  459147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:32:53.974585  459147 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:32:54.112406  459147 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:32:54.245426  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:32:54.259855  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:32:54.278930  459147 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 19:32:54.279008  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.289913  459147 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:32:54.289992  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.300687  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.312480  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.324895  459147 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:32:54.335879  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.347434  459147 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.367882  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.379415  459147 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:32:54.390488  459147 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:32:54.390554  459147 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:32:54.411855  459147 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:32:54.423747  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:32:54.562086  459147 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:32:54.707957  459147 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:32:54.708052  459147 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:32:54.712631  459147 start.go:563] Will wait 60s for crictl version
	I0717 19:32:54.712693  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:54.716329  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:32:54.753525  459147 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:32:54.753634  459147 ssh_runner.go:195] Run: crio --version
	I0717 19:32:54.782659  459147 ssh_runner.go:195] Run: crio --version
	I0717 19:32:54.813996  459147 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 19:32:53.338154  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Start
	I0717 19:32:53.338327  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring networks are active...
	I0717 19:32:53.338965  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring network default is active
	I0717 19:32:53.339348  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring network mk-default-k8s-diff-port-378944 is active
	I0717 19:32:53.339780  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Getting domain xml...
	I0717 19:32:53.340436  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Creating domain...
	I0717 19:32:54.632016  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting to get IP...
	I0717 19:32:54.632953  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.633425  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.633541  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:54.633409  460568 retry.go:31] will retry after 191.141019ms: waiting for machine to come up
	I0717 19:32:54.825767  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.826279  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.826311  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:54.826243  460568 retry.go:31] will retry after 334.738903ms: waiting for machine to come up
	I0717 19:32:55.162861  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.163361  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.163394  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:55.163319  460568 retry.go:31] will retry after 446.719082ms: waiting for machine to come up
	I0717 19:32:55.611971  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.612359  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.612388  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:55.612297  460568 retry.go:31] will retry after 387.196239ms: waiting for machine to come up
	I0717 19:32:56.000969  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.001385  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.001421  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:56.001323  460568 retry.go:31] will retry after 618.776991ms: waiting for machine to come up
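The retry.go entries for default-k8s-diff-port-378944 keep polling for the domain's DHCP-assigned IP, sleeping a little longer (with jitter) after each miss. A rough sketch of that wait-for-IP pattern; lookupIP is a stand-in stub, not a real libvirt call:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupIP stands in for the real libvirt DHCP lease query; it is a stub
// that pretends the machine gets an address on the fifth attempt.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoIP
	}
	return "192.168.50.10", nil // placeholder address
}

func main() {
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the wait with the attempt number and add jitter, similar in
		// spirit to the increasing delays printed by retry.go above.
		wait := time.Duration(attempt)*200*time.Millisecond +
			time.Duration(rand.Intn(100))*time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
	}
}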
	I0717 19:32:54.815249  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:54.818280  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:54.818662  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:54.818694  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:54.818925  459147 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 19:32:54.823292  459147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:32:54.837168  459147 kubeadm.go:883] updating cluster {Name:no-preload-713715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:32:54.837345  459147 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:32:54.837394  459147 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:32:54.875819  459147 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 19:32:54.875859  459147 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:32:54.875946  459147 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:54.875964  459147 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:54.875987  459147 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:54.876016  459147 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:54.876030  459147 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 19:32:54.875991  459147 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:54.875971  459147 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:54.875949  459147 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:54.878011  459147 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:54.878029  459147 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:54.878033  459147 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:54.878047  459147 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:54.878078  459147 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:54.878020  459147 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:54.878020  459147 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:54.878021  459147 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 19:32:55.044905  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.065945  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.077752  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.100576  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 19:32:55.105038  459147 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 19:32:55.105122  459147 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.105181  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.109323  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.138522  459147 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 19:32:55.138582  459147 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.138652  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.166056  459147 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 19:32:55.166116  459147 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.166172  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.225986  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.255114  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.291108  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.291133  459147 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 19:32:55.291179  459147 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.291225  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.291238  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.291283  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.291287  459147 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 19:32:55.291355  459147 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.291382  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.317030  459147 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 19:32:55.317075  459147 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.317122  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.372223  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.372291  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.372329  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.378465  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.378498  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 19:32:55.378504  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 19:32:55.378584  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.378593  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:55.378589  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:32:55.443789  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 19:32:55.443799  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 19:32:55.443851  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.443902  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.443914  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:32:55.451377  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 19:32:55.451452  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 19:32:55.451487  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 19:32:55.451496  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:32:55.451535  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 19:32:55.451540  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:32:55.452022  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 19:32:55.848543  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:56.622250  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.622728  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.622756  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:56.622674  460568 retry.go:31] will retry after 591.25664ms: waiting for machine to come up
	I0717 19:32:57.215318  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:57.215728  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:57.215760  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:57.215674  460568 retry.go:31] will retry after 1.178875952s: waiting for machine to come up
	I0717 19:32:58.396341  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:58.396810  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:58.396840  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:58.396757  460568 retry.go:31] will retry after 1.444090511s: waiting for machine to come up
	I0717 19:32:59.842294  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:59.842722  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:59.842750  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:59.842683  460568 retry.go:31] will retry after 1.660894501s: waiting for machine to come up
	I0717 19:32:57.819031  459147 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.367504857s)
	I0717 19:32:57.819080  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 19:32:57.819112  459147 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0: (2.367550192s)
	I0717 19:32:57.819123  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 19:32:57.819196  459147 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.970607417s)
	I0717 19:32:57.819211  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.375270996s)
	I0717 19:32:57.819232  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 19:32:57.819254  459147 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 19:32:57.819260  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:57.819291  459147 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:57.819322  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:57.819335  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:57.823619  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:59.879412  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.060056699s)
	I0717 19:32:59.879448  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 19:32:59.879475  459147 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.055825616s)
	I0717 19:32:59.879539  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 19:32:59.879480  459147 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:32:59.879645  459147 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:32:59.879762  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:33:01.862179  459147 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.982496804s)
	I0717 19:33:01.862232  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 19:33:01.862284  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.982489567s)
	I0717 19:33:01.862311  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 19:33:01.862352  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:33:01.862439  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:33:01.505553  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:01.505921  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:01.505949  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:01.505876  460568 retry.go:31] will retry after 1.937668711s: waiting for machine to come up
	I0717 19:33:03.445356  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:03.445903  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:03.445949  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:03.445839  460568 retry.go:31] will retry after 2.088910223s: waiting for machine to come up
	I0717 19:33:05.537212  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:05.537609  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:05.537640  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:05.537527  460568 retry.go:31] will retry after 2.960616491s: waiting for machine to come up
	I0717 19:33:03.827643  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.965173972s)
	I0717 19:33:03.827677  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 19:33:03.827712  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:33:03.827769  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:33:05.287464  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.459663322s)
	I0717 19:33:05.287509  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 19:33:05.287543  459147 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:33:05.287638  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:33:08.500028  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:08.500625  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:08.500667  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:08.500568  460568 retry.go:31] will retry after 3.494426589s: waiting for machine to come up
	I0717 19:33:08.560006  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.272339244s)
	I0717 19:33:08.560060  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 19:33:08.560099  459147 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:33:08.560169  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:33:09.202632  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 19:33:09.202684  459147 cache_images.go:123] Successfully loaded all cached images
	I0717 19:33:09.202692  459147 cache_images.go:92] duration metric: took 14.326812062s to LoadCachedImages
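The cache_images/crio lines above repeat a simple per-image flow: stat the tarball under /var/lib/minikube/images, skip the transfer when it already exists, then load it into the runtime with podman load. A compressed sketch of that flow, run locally with os/exec instead of over SSH, purely as an illustration:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// loadCachedImage copies the image tarball only when it is missing on the
// node, then loads it into the CRI-O image store via podman, mirroring the
// "copy: skipping ... (exists)" / "podman load -i ..." steps in the log.
func loadCachedImage(cacheDir, name string) error {
	src := filepath.Join(cacheDir, name)
	dst := filepath.Join("/var/lib/minikube/images", name) // path used in the log
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("copy: skipping %s (exists)\n", dst)
	} else if err := exec.Command("sudo", "cp", src, dst).Run(); err != nil {
		return fmt.Errorf("transfer %s: %w", name, err)
	}
	return exec.Command("sudo", "podman", "load", "-i", dst).Run()
}

func main() {
	cache := filepath.Join(os.Getenv("HOME"), ".minikube/cache/images/amd64")
	if err := loadCachedImage(cache, "etcd_3.5.14-0"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}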
	I0717 19:33:09.202709  459147 kubeadm.go:934] updating node { 192.168.61.66 8443 v1.31.0-beta.0 crio true true} ...
	I0717 19:33:09.202917  459147 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-713715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:09.203024  459147 ssh_runner.go:195] Run: crio config
	I0717 19:33:09.250281  459147 cni.go:84] Creating CNI manager for ""
	I0717 19:33:09.250307  459147 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:09.250319  459147 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:09.250348  459147 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.66 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-713715 NodeName:no-preload-713715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.66"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.66 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:09.250507  459147 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.66
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-713715"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.66
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.66"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:09.250572  459147 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 19:33:09.260855  459147 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:09.260926  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:09.270148  459147 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0717 19:33:09.287113  459147 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 19:33:09.303147  459147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0717 19:33:09.319718  459147 ssh_runner.go:195] Run: grep 192.168.61.66	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:09.323343  459147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.66	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:09.335051  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:09.458012  459147 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:09.476517  459147 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715 for IP: 192.168.61.66
	I0717 19:33:09.476548  459147 certs.go:194] generating shared ca certs ...
	I0717 19:33:09.476581  459147 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:09.476822  459147 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:09.476888  459147 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:09.476901  459147 certs.go:256] generating profile certs ...
	I0717 19:33:09.477093  459147 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/client.key
	I0717 19:33:09.477157  459147 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.key.833d71c5
	I0717 19:33:09.477198  459147 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.key
	I0717 19:33:09.477346  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:09.477380  459147 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:09.477390  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:09.477415  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:09.477436  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:09.477460  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:09.477496  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:09.478210  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:09.523245  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:09.556326  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:09.592018  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:09.631190  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 19:33:09.663671  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:33:09.691062  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:09.715211  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:09.740818  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:09.766086  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:09.791739  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:09.817034  459147 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:09.835074  459147 ssh_runner.go:195] Run: openssl version
	I0717 19:33:09.841297  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:09.853525  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.857984  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.858052  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.864308  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:09.875577  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:09.886977  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.891840  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.891894  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.898044  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:09.910756  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:09.922945  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.927708  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.927771  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.933774  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:09.945891  459147 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:09.950743  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:09.956992  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:09.963228  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:09.969576  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:09.975912  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:09.982164  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
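Each of the openssl x509 -checkend 86400 runs above only asks whether the certificate is still valid 24 hours from now. The same question expressed with Go's crypto/x509, shown as an illustrative equivalent rather than what minikube actually executes:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before
// now+window — the same check `openssl x509 -checkend <seconds>` performs.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; adjust for a real run.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}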
	I0717 19:33:09.988308  459147 kubeadm.go:392] StartCluster: {Name:no-preload-713715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:09.988412  459147 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:09.988473  459147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:10.038048  459147 cri.go:89] found id: ""
	I0717 19:33:10.038123  459147 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:10.050153  459147 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:10.050179  459147 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:10.050244  459147 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:10.061413  459147 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:10.062384  459147 kubeconfig.go:125] found "no-preload-713715" server: "https://192.168.61.66:8443"
	I0717 19:33:10.064510  459147 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:10.075459  459147 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.66
	I0717 19:33:10.075494  459147 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:10.075507  459147 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:10.075551  459147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:10.115024  459147 cri.go:89] found id: ""
	I0717 19:33:10.115093  459147 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:10.135459  459147 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:10.147000  459147 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:10.147027  459147 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:10.147074  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:10.158197  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:10.158267  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:10.168726  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:10.178115  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:10.178169  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:10.187888  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:10.197501  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:10.197564  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:10.208958  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:10.219818  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:10.219889  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:10.230847  459147 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:10.242115  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:10.352629  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.306147  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.508125  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.570418  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
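	(The five `kubeadm init phase ...` runs above regenerate, in order: certificates, kubeconfigs, kubelet bootstrap config, static control-plane manifests, and local etcd. A minimal Go sketch of driving that same sequence with os/exec follows; the binary and config paths are copied from the log lines, but running them locally like this is only an illustration of the pattern, not minikube's actual ssh_runner plumbing.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Phases in the same order the log shows; each is idempotent and
		// reads the rendered /var/tmp/minikube/kubeadm.yaml.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := fmt.Sprintf(
				"sudo env PATH=/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml", p)
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("phase %q failed: %v\n%s\n", p, err, out)
				return
			}
		}
	}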
	I0717 19:33:11.632907  459147 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:11.633012  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:12.133086  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:13.378581  459741 start.go:364] duration metric: took 4m1.766913597s to acquireMachinesLock for "old-k8s-version-998147"
	I0717 19:33:13.378661  459741 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:33:13.378670  459741 fix.go:54] fixHost starting: 
	I0717 19:33:13.379301  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:13.379346  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:13.399824  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0717 19:33:13.400269  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:13.400788  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:33:13.400811  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:13.401179  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:13.401339  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:13.401493  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetState
	I0717 19:33:13.403027  459741 fix.go:112] recreateIfNeeded on old-k8s-version-998147: state=Stopped err=<nil>
	I0717 19:33:13.403059  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	W0717 19:33:13.403205  459741 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:33:13.405244  459741 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-998147" ...
	I0717 19:33:11.996171  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.996646  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has current primary IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.996667  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Found IP for machine: 192.168.50.238
	I0717 19:33:11.996682  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Reserving static IP address...
	I0717 19:33:11.997157  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-378944", mac: "52:54:00:45:42:f3", ip: "192.168.50.238"} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:11.997197  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | skip adding static IP to network mk-default-k8s-diff-port-378944 - found existing host DHCP lease matching {name: "default-k8s-diff-port-378944", mac: "52:54:00:45:42:f3", ip: "192.168.50.238"}
	I0717 19:33:11.997213  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Reserved static IP address: 192.168.50.238
	I0717 19:33:11.997228  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for SSH to be available...
	I0717 19:33:11.997244  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Getting to WaitForSSH function...
	I0717 19:33:11.999193  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.999538  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:11.999564  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.999654  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Using SSH client type: external
	I0717 19:33:11.999689  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa (-rw-------)
	I0717 19:33:11.999718  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:11.999733  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | About to run SSH command:
	I0717 19:33:11.999751  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | exit 0
	I0717 19:33:12.124608  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | SSH cmd err, output: <nil>: 
	I0717 19:33:12.125041  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetConfigRaw
	I0717 19:33:12.125695  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:12.128263  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.128651  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.128683  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.128911  459447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/config.json ...
	I0717 19:33:12.129169  459447 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:12.129202  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:12.129412  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.131942  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.132259  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.132286  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.132464  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.132666  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.132847  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.133004  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.133213  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.133470  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.133484  459447 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:12.250371  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:12.250406  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.250672  459447 buildroot.go:166] provisioning hostname "default-k8s-diff-port-378944"
	I0717 19:33:12.250700  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.250891  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.253509  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.253895  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.253929  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.254116  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.254301  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.254467  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.254659  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.254809  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.255033  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.255048  459447 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-378944 && echo "default-k8s-diff-port-378944" | sudo tee /etc/hostname
	I0717 19:33:12.386839  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-378944
	
	I0717 19:33:12.386875  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.390265  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.390716  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.390758  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.390942  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.391165  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.391397  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.391593  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.391800  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.392028  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.392055  459447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-378944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-378944/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-378944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:12.510012  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:33:12.510080  459447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:12.510118  459447 buildroot.go:174] setting up certificates
	I0717 19:33:12.510139  459447 provision.go:84] configureAuth start
	I0717 19:33:12.510154  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.510469  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:12.513360  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.513713  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.513756  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.513840  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.516188  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.516606  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.516643  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.516778  459447 provision.go:143] copyHostCerts
	I0717 19:33:12.516867  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:12.516887  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:12.516946  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:12.517049  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:12.517060  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:12.517081  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:12.517133  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:12.517140  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:12.517157  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:12.517251  459447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-378944 san=[127.0.0.1 192.168.50.238 default-k8s-diff-port-378944 localhost minikube]
	I0717 19:33:12.664603  459447 provision.go:177] copyRemoteCerts
	I0717 19:33:12.664664  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:12.664692  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.667683  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.668071  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.668152  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.668276  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.668477  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.668665  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.668825  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:12.759500  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 19:33:12.789011  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:12.817876  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:12.847651  459447 provision.go:87] duration metric: took 337.491277ms to configureAuth
	I0717 19:33:12.847684  459447 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:12.847927  459447 config.go:182] Loaded profile config "default-k8s-diff-port-378944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:33:12.848029  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.851001  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.851460  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.851492  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.851670  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.851860  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.852050  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.852269  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.852466  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.852711  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.852736  459447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:13.135242  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:13.135272  459447 machine.go:97] duration metric: took 1.006081548s to provisionDockerMachine
	I0717 19:33:13.135286  459447 start.go:293] postStartSetup for "default-k8s-diff-port-378944" (driver="kvm2")
	I0717 19:33:13.135300  459447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:13.135331  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.135696  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:13.135731  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.138908  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.139252  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.139296  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.139577  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.139797  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.139996  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.140122  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.223998  459447 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:13.228297  459447 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:13.228327  459447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:13.228402  459447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:13.228508  459447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:13.228631  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:13.237923  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:13.262958  459447 start.go:296] duration metric: took 127.634911ms for postStartSetup
	I0717 19:33:13.263013  459447 fix.go:56] duration metric: took 19.949222697s for fixHost
	I0717 19:33:13.263040  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.265687  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.266102  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.266147  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.266274  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.266448  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.266658  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.266803  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.266974  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:13.267143  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:13.267154  459447 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:33:13.378375  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244793.352700977
	
	I0717 19:33:13.378410  459447 fix.go:216] guest clock: 1721244793.352700977
	I0717 19:33:13.378423  459447 fix.go:229] Guest: 2024-07-17 19:33:13.352700977 +0000 UTC Remote: 2024-07-17 19:33:13.263019102 +0000 UTC m=+276.814321502 (delta=89.681875ms)
	I0717 19:33:13.378449  459447 fix.go:200] guest clock delta is within tolerance: 89.681875ms
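	(The `date +%s.%N` probe above is how the guest clock is compared with the host: the seconds.nanoseconds string is parsed, subtracted from the local time, and the machine is accepted if the delta stays within tolerance. A rough sketch of that check follows; the one-second tolerance and the hard-coded sample value are assumptions for illustration only.)

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestTime parses the "seconds.nanoseconds" output of `date +%s.%N`.
	func guestTime(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := guestTime("1721244793.352700977") // sample value taken from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n", delta, delta < time.Second)
	}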
	I0717 19:33:13.378455  459447 start.go:83] releasing machines lock for "default-k8s-diff-port-378944", held for 20.064692595s
	I0717 19:33:13.378490  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.378818  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:13.382250  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.382663  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.382697  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.382819  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383336  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383515  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383640  459447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:13.383699  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.383782  459447 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:13.383808  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.386565  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.386802  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.386971  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.387022  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.387206  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.387255  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.387280  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.387377  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.387517  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.387595  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.387695  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.387769  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.387822  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.387963  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.491993  459447 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:13.498224  459447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:13.651601  459447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:13.659061  459447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:13.659131  459447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:13.679137  459447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:13.679172  459447 start.go:495] detecting cgroup driver to use...
	I0717 19:33:13.679244  459447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:13.700173  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:13.713284  459447 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:13.713345  459447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:13.727665  459447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:13.741270  459447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:13.850771  459447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:14.014484  459447 docker.go:233] disabling docker service ...
	I0717 19:33:14.014573  459447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:14.034049  459447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:14.051903  459447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:14.176188  459447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:14.339288  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:14.354934  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:14.376713  459447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:33:14.376781  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.387318  459447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:14.387395  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.401869  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.414206  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.426803  459447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:14.437992  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.448554  459447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.467390  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.478878  459447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:14.488552  459447 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:14.488623  459447 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:14.501075  459447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:33:14.511085  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:14.673591  459447 ssh_runner.go:195] Run: sudo systemctl restart crio
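	(Each `sed -i` run above rewrites a single key in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted: the pause image, the cgroup manager, the conmon cgroup, and the default sysctls. The same line-replacement pattern can be expressed in Go as below; this is only a sketch of the technique, not what minikube executes on the guest, and the starting file contents are invented for the example.)

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := []byte("pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n")

		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))

		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))

		fmt.Print(string(conf))
	}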
	I0717 19:33:14.812878  459447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:14.812974  459447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:14.818074  459447 start.go:563] Will wait 60s for crictl version
	I0717 19:33:14.818143  459447 ssh_runner.go:195] Run: which crictl
	I0717 19:33:14.822116  459447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:14.861763  459447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:14.861843  459447 ssh_runner.go:195] Run: crio --version
	I0717 19:33:14.891729  459447 ssh_runner.go:195] Run: crio --version
	I0717 19:33:14.925638  459447 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 19:33:14.927088  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:14.930542  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:14.931022  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:14.931068  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:14.931326  459447 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:14.936085  459447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:14.949590  459447 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-378944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
	KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
	CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:14.949747  459447 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:33:14.949875  459447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:14.991945  459447 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 19:33:14.992031  459447 ssh_runner.go:195] Run: which lz4
	I0717 19:33:14.996373  459447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:33:15.000840  459447 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:15.000875  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 19:33:13.406372  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .Start
	I0717 19:33:13.406519  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring networks are active...
	I0717 19:33:13.407255  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring network default is active
	I0717 19:33:13.407627  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring network mk-old-k8s-version-998147 is active
	I0717 19:33:13.408062  459741 main.go:141] libmachine: (old-k8s-version-998147) Getting domain xml...
	I0717 19:33:13.408909  459741 main.go:141] libmachine: (old-k8s-version-998147) Creating domain...
	I0717 19:33:14.690306  459741 main.go:141] libmachine: (old-k8s-version-998147) Waiting to get IP...
	I0717 19:33:14.691339  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:14.691802  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:14.691860  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:14.691788  460739 retry.go:31] will retry after 292.702678ms: waiting for machine to come up
	I0717 19:33:14.986450  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:14.986962  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:14.986987  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:14.986940  460739 retry.go:31] will retry after 251.722663ms: waiting for machine to come up
	I0717 19:33:15.240732  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:15.241343  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:15.241374  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:15.241290  460739 retry.go:31] will retry after 352.774498ms: waiting for machine to come up
	I0717 19:33:15.596176  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:15.596833  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:15.596859  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:15.596740  460739 retry.go:31] will retry after 570.542375ms: waiting for machine to come up
	I0717 19:33:16.168613  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:16.169103  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:16.169125  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:16.169061  460739 retry.go:31] will retry after 505.770507ms: waiting for machine to come up
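	(The `retry.go:31] will retry after ...` lines above are a wait-for-IP loop with a randomized, growing backoff while the restarted VM acquires a DHCP lease. A minimal sketch of that shape follows; lookupIP stands in for the libvirt DHCP-lease query and is hypothetical, and the deadline and backoff values are illustrative rather than minikube's.)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for querying the libvirt network's
	// DHCP leases for the domain's MAC address.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func main() {
		deadline := time.Now().Add(30 * time.Second) // illustrative deadline
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				fmt.Println("machine IP:", ip)
				return
			}
			// Randomize the wait so parallel retries do not synchronize.
			wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			backoff *= 2
		}
		fmt.Println("timed out waiting for machine IP")
	}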
	I0717 19:33:12.633596  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:12.674417  459147 api_server.go:72] duration metric: took 1.041511526s to wait for apiserver process to appear ...
	I0717 19:33:12.674443  459147 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:33:12.674473  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:12.674950  459147 api_server.go:269] stopped: https://192.168.61.66:8443/healthz: Get "https://192.168.61.66:8443/healthz": dial tcp 192.168.61.66:8443: connect: connection refused
	I0717 19:33:13.174575  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.167465  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.167503  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.167518  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.195663  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.195695  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.195712  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.203849  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.203880  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.674535  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.681650  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:16.681679  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:17.174938  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:17.186827  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:17.186890  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:17.674682  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:17.680814  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:17.680865  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:18.175463  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:18.181547  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:18.181576  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:18.675166  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:18.681507  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:18.681552  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:19.174630  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:19.183370  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:19.183416  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:19.674583  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:19.682432  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 200:
	ok
	I0717 19:33:19.691489  459147 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:33:19.691518  459147 api_server.go:131] duration metric: took 7.017066476s to wait for apiserver health ...
	I0717 19:33:19.691534  459147 cni.go:84] Creating CNI manager for ""
	I0717 19:33:19.691542  459147 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:19.693575  459147 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
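	The 500 responses above come from minikube polling the apiserver's /healthz endpoint roughly every 500ms until all post-start hooks report ok. A minimal, self-contained sketch of that kind of readiness loop is shown below; it is illustrative only, not the actual api_server.go implementation, and the URL, timeout, and cadence are assumptions taken from the log.

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// TLS verification is skipped because the apiserver certificate is self-signed here.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence visible in the log
	}
	return errors.New("apiserver never became healthy")
}

func main() {
	if err := waitForHealthz("https://192.168.61.66:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}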
	I0717 19:33:16.494615  459447 crio.go:462] duration metric: took 1.498275118s to copy over tarball
	I0717 19:33:16.494697  459447 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:18.869018  459447 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.37428331s)
	I0717 19:33:18.869052  459447 crio.go:469] duration metric: took 2.374406548s to extract the tarball
	I0717 19:33:18.869063  459447 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:18.911073  459447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:18.952704  459447 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:33:18.952731  459447 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:33:18.952740  459447 kubeadm.go:934] updating node { 192.168.50.238 8444 v1.30.2 crio true true} ...
	I0717 19:33:18.952871  459447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-378944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:18.952961  459447 ssh_runner.go:195] Run: crio config
	I0717 19:33:19.004936  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:33:19.004962  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:19.004976  459447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:19.004997  459447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.238 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-378944 NodeName:default-k8s-diff-port-378944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:19.005127  459447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.238
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-378944"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
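	minikube renders the kubelet unit drop-in and the kubeadm YAML above from templates, substituting the node name, IP, and port recorded in the log. A rough standard-library sketch of that pattern follows; the template text and struct fields here are illustrative assumptions, not minikube's real templates.

package main

import (
	"os"
	"text/template"
)

// nodeParams holds the per-node values substituted into the config template.
type nodeParams struct {
	NodeName string
	NodeIP   string
}

// kubeletUnit is a hypothetical, shortened version of the drop-in seen in the log.
const kubeletUnit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Render the drop-in for the node that appears in the log above.
	_ = tmpl.Execute(os.Stdout, nodeParams{
		NodeName: "default-k8s-diff-port-378944",
		NodeIP:   "192.168.50.238",
	})
}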
	I0717 19:33:19.005190  459447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 19:33:19.018466  459447 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:19.018532  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:19.030706  459447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 19:33:19.050125  459447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:19.066411  459447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0717 19:33:19.083019  459447 ssh_runner.go:195] Run: grep 192.168.50.238	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:19.086956  459447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:19.098483  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:19.219538  459447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:19.240712  459447 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944 for IP: 192.168.50.238
	I0717 19:33:19.240760  459447 certs.go:194] generating shared ca certs ...
	I0717 19:33:19.240784  459447 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:19.240971  459447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:19.241029  459447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:19.241046  459447 certs.go:256] generating profile certs ...
	I0717 19:33:19.241147  459447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/client.key
	I0717 19:33:19.241232  459447 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.key.e4ed83d1
	I0717 19:33:19.241292  459447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.key
	I0717 19:33:19.241430  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:19.241472  459447 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:19.241488  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:19.241527  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:19.241563  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:19.241599  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:19.241670  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:19.242447  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:19.274950  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:19.305226  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:19.348027  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:19.384636  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 19:33:19.415615  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:33:19.443553  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:19.477731  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:19.509828  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:19.536409  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:19.562482  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:19.586980  459447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:19.603021  459447 ssh_runner.go:195] Run: openssl version
	I0717 19:33:19.608707  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:19.619272  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.624082  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.624144  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.630085  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:19.640930  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:19.651717  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.656207  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.656265  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.662211  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:19.672893  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:19.686880  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.691831  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.691883  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.699526  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:19.712458  459447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:19.717815  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:19.726172  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:19.732924  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:19.739322  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:19.749452  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:19.756136  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
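	Each "openssl x509 -noout -checkend 86400" run above asks whether a certificate expires within the next 24 hours. The same check expressed in Go is sketched below for comparison; the certificate path is a placeholder and this is not the code minikube actually runs.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window (the equivalent of openssl's -checkend).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}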
	I0717 19:33:19.763812  459447 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-378944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:19.763936  459447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:19.763998  459447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:19.807197  459447 cri.go:89] found id: ""
	I0717 19:33:19.807303  459447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:19.819547  459447 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:19.819577  459447 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:19.819652  459447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:19.832162  459447 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:19.833260  459447 kubeconfig.go:125] found "default-k8s-diff-port-378944" server: "https://192.168.50.238:8444"
	I0717 19:33:19.835685  459447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:19.849027  459447 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.238
	I0717 19:33:19.849077  459447 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:19.849094  459447 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:19.849182  459447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:19.893260  459447 cri.go:89] found id: ""
	I0717 19:33:19.893337  459447 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:19.910254  459447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:19.920017  459447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:19.920039  459447 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:19.920093  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 19:33:19.929144  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:19.929212  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:19.938461  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 19:33:19.947172  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:19.947242  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:19.956774  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 19:33:19.965778  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:19.965832  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:19.975529  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 19:33:19.984977  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:19.985037  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:19.994548  459447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:20.003758  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:20.326183  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.077120  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.274281  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.372150  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.472510  459447 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:21.472619  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
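	The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml instead of a full kubeadm init. Below is a hedged sketch of shelling those phases out from Go; the phase list is copied from the log, while the wrapper itself is purely illustrative and not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phases in the same order the log runs them during a control-plane restart.
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		cmd := exec.Command("/bin/bash", "-c",
			"sudo env PATH=/var/lib/minikube/binaries/v1.30.2:$PATH kubeadm init phase "+p+" --config /var/tmp/minikube/kubeadm.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Printf("kubeadm init phase %s: err=%v\n%s\n", p, err, out)
	}
}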
	I0717 19:33:16.676221  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:16.676783  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:16.676810  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:16.676699  460739 retry.go:31] will retry after 789.027841ms: waiting for machine to come up
	I0717 19:33:17.467899  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:17.468360  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:17.468388  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:17.468307  460739 retry.go:31] will retry after 851.039047ms: waiting for machine to come up
	I0717 19:33:18.321307  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:18.321848  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:18.321877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:18.321790  460739 retry.go:31] will retry after 1.177722997s: waiting for machine to come up
	I0717 19:33:19.501191  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:19.501846  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:19.501877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:19.501754  460739 retry.go:31] will retry after 1.20353732s: waiting for machine to come up
	I0717 19:33:20.707223  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:20.707681  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:20.707715  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:20.707620  460739 retry.go:31] will retry after 2.05955161s: waiting for machine to come up
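	The libmachine lines above poll the KVM domain for an IP address, retrying with a growing, jittered delay until the guest comes up. A generic sketch of that retry-with-backoff pattern follows; the attempt count, base delay, and growth factor are assumptions, and this is not minikube's actual retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping a jittered, growing delay between tries (as the log's retries do).
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the wait, roughly matching the log's pattern
	}
	return errors.New("gave up waiting for machine to come up")
}

func main() {
	_ = retryWithBackoff(10, 500*time.Millisecond, func() error {
		return errors.New("unable to find current IP address") // placeholder probe
	})
}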
	I0717 19:33:19.694884  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:33:19.710519  459147 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:33:19.732437  459147 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:33:19.743619  459147 system_pods.go:59] 8 kube-system pods found
	I0717 19:33:19.743647  459147 system_pods.go:61] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:33:19.743657  459147 system_pods.go:61] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:33:19.743667  459147 system_pods.go:61] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:33:19.743681  459147 system_pods.go:61] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:33:19.743688  459147 system_pods.go:61] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:33:19.743698  459147 system_pods.go:61] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:33:19.743711  459147 system_pods.go:61] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:33:19.743718  459147 system_pods.go:61] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:33:19.743725  459147 system_pods.go:74] duration metric: took 11.261865ms to wait for pod list to return data ...
	I0717 19:33:19.743742  459147 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:33:19.749108  459147 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:33:19.749135  459147 node_conditions.go:123] node cpu capacity is 2
	I0717 19:33:19.749163  459147 node_conditions.go:105] duration metric: took 5.414531ms to run NodePressure ...
	I0717 19:33:19.749183  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:22.151017  459147 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.401804862s)
	I0717 19:33:22.151065  459147 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:33:22.158240  459147 kubeadm.go:739] kubelet initialised
	I0717 19:33:22.158277  459147 kubeadm.go:740] duration metric: took 7.198956ms waiting for restarted kubelet to initialise ...
	I0717 19:33:22.158298  459147 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:22.164783  459147 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.174103  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.174465  459147 pod_ready.go:81] duration metric: took 9.568158ms for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.174513  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.174544  459147 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.184692  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "etcd-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.184804  459147 pod_ready.go:81] duration metric: took 10.23708ms for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.184862  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "etcd-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.184891  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.193029  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-apiserver-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.193143  459147 pod_ready.go:81] duration metric: took 8.227095ms for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.193175  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-apiserver-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.193234  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.200916  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.201017  459147 pod_ready.go:81] duration metric: took 7.740745ms for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.201047  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.201081  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.555554  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-proxy-x85f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.555590  459147 pod_ready.go:81] duration metric: took 354.475367ms for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.555603  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-proxy-x85f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.555612  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.977850  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-scheduler-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.977889  459147 pod_ready.go:81] duration metric: took 422.268041ms for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.977904  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-scheduler-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.977913  459147 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:23.355727  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:23.355765  459147 pod_ready.go:81] duration metric: took 377.839773ms for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:23.355778  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:23.355787  459147 pod_ready.go:38] duration metric: took 1.197476636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
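	The pod_ready.go wait above repeatedly fetches each system-critical pod and checks its Ready condition, skipping pods whose node is not yet Ready. A compact client-go sketch of such a readiness poll is shown below; it assumes k8s.io/client-go is available, and the kubeconfig path, namespace, and pod name are taken from the log purely for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19282-392903/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Poll one of the system-critical pods named in the log until it is Ready.
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-x85f5", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}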
	I0717 19:33:23.355807  459147 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:33:23.369763  459147 ops.go:34] apiserver oom_adj: -16
	I0717 19:33:23.369789  459147 kubeadm.go:597] duration metric: took 13.319602224s to restartPrimaryControlPlane
	I0717 19:33:23.369801  459147 kubeadm.go:394] duration metric: took 13.381501456s to StartCluster
	I0717 19:33:23.369825  459147 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:23.369925  459147 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:33:23.371364  459147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:23.371643  459147 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:33:23.371763  459147 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:33:23.371851  459147 addons.go:69] Setting storage-provisioner=true in profile "no-preload-713715"
	I0717 19:33:23.371902  459147 addons.go:234] Setting addon storage-provisioner=true in "no-preload-713715"
	I0717 19:33:23.371905  459147 config.go:182] Loaded profile config "no-preload-713715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0717 19:33:23.371915  459147 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:33:23.371904  459147 addons.go:69] Setting default-storageclass=true in profile "no-preload-713715"
	I0717 19:33:23.371921  459147 addons.go:69] Setting metrics-server=true in profile "no-preload-713715"
	I0717 19:33:23.371949  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.371963  459147 addons.go:234] Setting addon metrics-server=true in "no-preload-713715"
	I0717 19:33:23.371962  459147 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-713715"
	W0717 19:33:23.371973  459147 addons.go:243] addon metrics-server should already be in state true
	I0717 19:33:23.372010  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.372248  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372283  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.372354  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372363  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372380  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.372466  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.373392  459147 out.go:177] * Verifying Kubernetes components...
	I0717 19:33:23.374639  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:23.391842  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45469
	I0717 19:33:23.391844  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I0717 19:33:23.392376  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.392449  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.392909  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.392934  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.393266  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.393283  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.393316  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.393673  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.394050  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.394066  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.394279  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.394317  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.413449  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0717 19:33:23.413977  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.414416  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.414429  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.414535  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35317
	I0717 19:33:23.414847  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.415050  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.415439  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33637
	I0717 19:33:23.415603  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.416098  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.416416  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.416442  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.416782  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.416860  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.417110  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.417129  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.417169  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.417631  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.417898  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.419162  459147 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:33:23.419540  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.420437  459147 addons.go:234] Setting addon default-storageclass=true in "no-preload-713715"
	W0717 19:33:23.420461  459147 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:33:23.420531  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.420670  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:33:23.420690  459147 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:33:23.420710  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.420935  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.420987  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.421482  459147 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:23.422876  459147 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:33:23.422895  459147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:33:23.422914  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.424665  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.425387  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.425596  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.425648  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.425860  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.426032  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.426224  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.426508  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.426884  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.426912  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.427019  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.427204  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.427375  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.427536  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.440935  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0717 19:33:23.441405  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.442015  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.442036  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.442449  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.443045  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.443086  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.462722  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0717 19:33:23.463099  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.463642  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.463666  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.464015  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.464302  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.465945  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.466153  459147 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:33:23.466168  459147 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:33:23.466187  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.469235  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.469665  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.469690  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.469961  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.470125  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.470263  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.470380  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.604321  459147 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:23.631723  459147 node_ready.go:35] waiting up to 6m0s for node "no-preload-713715" to be "Ready" ...
	I0717 19:33:23.691508  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:33:23.691839  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:33:23.870407  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:33:23.870440  459147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:33:23.962828  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:33:23.962862  459147 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:33:24.048413  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:33:24.048458  459147 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:33:24.180828  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:33:25.337869  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.645994421s)
	I0717 19:33:25.337928  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.337939  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.338245  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.338260  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.338267  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.338279  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.340140  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.340158  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.340163  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.341608  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.650024823s)
	I0717 19:33:25.341659  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.341673  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.341991  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.342008  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.342052  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.342072  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.342087  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.343152  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.343174  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.374730  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.374764  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.375093  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.375192  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.375214  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.648979  459147 node_ready.go:53] node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:25.756694  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.575723552s)
	I0717 19:33:25.756793  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.756809  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.757133  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.757197  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.757210  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.757222  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.757231  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.757463  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.757496  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.757508  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.757518  459147 addons.go:475] Verifying addon metrics-server=true in "no-preload-713715"
	I0717 19:33:25.760056  459147 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
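The addon step above copies each manifest to /etc/kubernetes/addons and applies it with the cluster's own kubectl against the in-VM kubeconfig. A rough Go sketch of that apply call, with paths taken from the log; wrapping the environment variable through "sudo env" is an assumption, the logged command passes KUBECONFIG to sudo directly:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl"
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}

	// sudo env KUBECONFIG=/var/lib/minikube/kubeconfig <kubectl> apply -f <m> -f <m> ...
	args := []string{"env", "KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}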
	I0717 19:33:21.973023  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:22.473773  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:22.494696  459447 api_server.go:72] duration metric: took 1.022184833s to wait for apiserver process to appear ...
	I0717 19:33:22.494730  459447 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:33:22.494756  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:22.495278  459447 api_server.go:269] stopped: https://192.168.50.238:8444/healthz: Get "https://192.168.50.238:8444/healthz": dial tcp 192.168.50.238:8444: connect: connection refused
	I0717 19:33:22.994814  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.523793  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:25.523836  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:25.523861  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.572664  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:25.572703  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:25.994910  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.999901  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:25.999941  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
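The api_server.go loop above polls https://<ip>:8444/healthz until it returns 200: "connection refused" means the apiserver is not yet listening, 403 means the anonymous probe arrives before RBAC bootstrap completes, and 500 lists which post-start hooks are still failing. A minimal Go sketch of such a poll; InsecureSkipVerify is only an assumption to keep the sketch self-contained, the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.50.238:8444/healthz"
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s returned %d\n", url, resp.StatusCode)
			if resp.StatusCode == http.StatusOK {
				return // "ok": control plane is healthy
			}
			fmt.Println(string(body)) // 403/500 bodies show which hooks still fail
		}
		time.Sleep(500 * time.Millisecond)
	}
}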
	I0717 19:33:22.769700  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:22.770437  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:22.770462  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:22.770379  460739 retry.go:31] will retry after 2.380645077s: waiting for machine to come up
	I0717 19:33:25.152531  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:25.153124  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:25.153154  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:25.152995  460739 retry.go:31] will retry after 2.594173577s: waiting for machine to come up
	I0717 19:33:25.761158  459147 addons.go:510] duration metric: took 2.389396179s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:33:26.636593  459147 node_ready.go:49] node "no-preload-713715" has status "Ready":"True"
	I0717 19:33:26.636631  459147 node_ready.go:38] duration metric: took 3.004871258s for node "no-preload-713715" to be "Ready" ...
	I0717 19:33:26.636647  459147 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:26.645025  459147 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.657588  459147 pod_ready.go:92] pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:26.657621  459147 pod_ready.go:81] duration metric: took 12.564266ms for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.657643  459147 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
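Once the node reports Ready, each system-critical pod is waited on individually via its PodReady condition, as the pod_ready lines above show. A short client-go sketch of that per-pod wait; the kubeconfig path is the one the log updates, and the pod name is illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19282-392903/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-713715", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				// The "Ready" wait succeeds when the PodReady condition is True.
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}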
	I0717 19:33:26.495865  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:26.501901  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:26.501948  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:26.995379  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:27.007246  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:27.007293  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:27.495657  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:27.500340  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:27.500376  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:27.995477  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.001272  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:28.001311  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:28.495106  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.499745  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:28.499785  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:28.994956  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.999368  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 200:
	ok
	I0717 19:33:29.005912  459447 api_server.go:141] control plane version: v1.30.2
	I0717 19:33:29.005941  459447 api_server.go:131] duration metric: took 6.511204058s to wait for apiserver health ...
	I0717 19:33:29.005952  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:33:29.005958  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:29.007962  459447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:33:29.009467  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:33:29.020044  459447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
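The bridge CNI step above writes a single conflist into /etc/cni/net.d. Illustrative only: a generic bridge plus host-local configuration of the kind written to 1-k8s.conflist; the exact JSON minikube generates is not shown in the log and may differ:

package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Matches the mkdir -p and scp seen in the log, with assumed file contents.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}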
	I0717 19:33:29.039591  459447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:33:29.049534  459447 system_pods.go:59] 8 kube-system pods found
	I0717 19:33:29.049575  459447 system_pods.go:61] "coredns-7db6d8ff4d-zrllj" [a343d67b-7bfe-4433-a6a0-dd129f622484] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:33:29.049585  459447 system_pods.go:61] "etcd-default-k8s-diff-port-378944" [8b73f940-3131-4c49-88a8-909e448a17fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:33:29.049592  459447 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-378944" [4368acf5-fcf0-4bb1-8518-dc883a3ad94a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:33:29.049600  459447 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-378944" [a9dce074-19b1-4375-bb51-2fa3a7e628a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:33:29.049605  459447 system_pods.go:61] "kube-proxy-qq6gq" [7cd51f2c-1d5d-4376-8685-a4912f158995] Running
	I0717 19:33:29.049609  459447 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-378944" [2889aa80-5d65-485f-b4ef-396e76a40a80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:33:29.049617  459447 system_pods.go:61] "metrics-server-569cc877fc-7rl9d" [217e917f-6179-4b21-baed-7293ef9f6fc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:33:29.049621  459447 system_pods.go:61] "storage-provisioner" [fc434634-e675-4df7-8df2-330e3f2cf36b] Running
	I0717 19:33:29.049628  459447 system_pods.go:74] duration metric: took 10.013687ms to wait for pod list to return data ...
	I0717 19:33:29.049640  459447 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:33:29.053279  459447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:33:29.053306  459447 node_conditions.go:123] node cpu capacity is 2
	I0717 19:33:29.053318  459447 node_conditions.go:105] duration metric: took 3.672966ms to run NodePressure ...
	I0717 19:33:29.053336  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:29.329460  459447 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:33:29.335545  459447 kubeadm.go:739] kubelet initialised
	I0717 19:33:29.335570  459447 kubeadm.go:740] duration metric: took 6.082515ms waiting for restarted kubelet to initialise ...
	I0717 19:33:29.335587  459447 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:29.343632  459447 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.348772  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.348798  459447 pod_ready.go:81] duration metric: took 5.144899ms for pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.348810  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.348820  459447 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.354355  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.354386  459447 pod_ready.go:81] duration metric: took 5.550767ms for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.354398  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.354410  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.359416  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.359433  459447 pod_ready.go:81] duration metric: took 5.007721ms for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.359442  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.359448  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:31.369477  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:27.748311  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:27.748683  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:27.748710  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:27.748647  460739 retry.go:31] will retry after 3.034683519s: waiting for machine to come up
	I0717 19:33:30.784524  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.784995  459741 main.go:141] libmachine: (old-k8s-version-998147) Found IP for machine: 192.168.72.208
	I0717 19:33:30.785018  459741 main.go:141] libmachine: (old-k8s-version-998147) Reserving static IP address...
	I0717 19:33:30.785042  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has current primary IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.785437  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "old-k8s-version-998147", mac: "52:54:00:e7:d4:91", ip: "192.168.72.208"} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.785462  459741 main.go:141] libmachine: (old-k8s-version-998147) Reserved static IP address: 192.168.72.208
	I0717 19:33:30.785478  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | skip adding static IP to network mk-old-k8s-version-998147 - found existing host DHCP lease matching {name: "old-k8s-version-998147", mac: "52:54:00:e7:d4:91", ip: "192.168.72.208"}
	I0717 19:33:30.785490  459741 main.go:141] libmachine: (old-k8s-version-998147) Waiting for SSH to be available...
	I0717 19:33:30.785502  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Getting to WaitForSSH function...
	I0717 19:33:30.787861  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.788286  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.788339  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.788506  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using SSH client type: external
	I0717 19:33:30.788535  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa (-rw-------)
	I0717 19:33:30.788575  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:30.788592  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | About to run SSH command:
	I0717 19:33:30.788605  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | exit 0
	I0717 19:33:30.916827  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | SSH cmd err, output: <nil>: 
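The WaitForSSH step above shells out to an external ssh client with host-key checking disabled and runs "exit 0" until the guest's sshd answers. A small Go sketch of that probe, reusing the address, user, key path and the most relevant options from the logged command line:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa",
		"docker@192.168.72.208",
		"exit", "0",
	}
	for {
		// A zero exit status means sshd is up and key auth works.
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
}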
	I0717 19:33:30.917232  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetConfigRaw
	I0717 19:33:30.917949  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:30.920672  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.921033  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.921069  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.921321  459741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json ...
	I0717 19:33:30.921518  459741 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:30.921538  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:30.921777  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:30.923995  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.924337  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.924364  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.924515  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:30.924708  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:30.924894  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:30.925021  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:30.925229  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:30.925417  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:30.925428  459741 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:31.037218  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:31.037249  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.037537  459741 buildroot.go:166] provisioning hostname "old-k8s-version-998147"
	I0717 19:33:31.037569  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.037782  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.040877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.041209  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.041252  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.041382  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.041577  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.041764  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.041940  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.042121  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.042313  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.042329  459741 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-998147 && echo "old-k8s-version-998147" | sudo tee /etc/hostname
	I0717 19:33:31.169368  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-998147
	
	I0717 19:33:31.169401  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.172170  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.172475  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.172520  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.172739  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.172950  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.173133  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.173321  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.173557  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.173809  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.173828  459741 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-998147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-998147/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-998147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:31.293920  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:33:31.293957  459741 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:31.293997  459741 buildroot.go:174] setting up certificates
	I0717 19:33:31.294010  459741 provision.go:84] configureAuth start
	I0717 19:33:31.294022  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.294383  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:31.297356  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.297766  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.297800  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.297961  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.300159  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.300454  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.300507  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.300638  459741 provision.go:143] copyHostCerts
	I0717 19:33:31.300707  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:31.300721  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:31.300787  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:31.300917  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:31.300929  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:31.300962  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:31.301038  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:31.301046  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:31.301066  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:31.301112  459741 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-998147 san=[127.0.0.1 192.168.72.208 localhost minikube old-k8s-version-998147]
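Annotation (not part of the log): the provision step above generates a server certificate signed by the machine CA with SANs covering 127.0.0.1, 192.168.72.208, localhost, minikube and old-k8s-version-998147, so the machine's TLS endpoints answer for any of those names. A minimal sketch of issuing such a SAN-bearing certificate with Go's crypto/x509 is shown below; the file names, PKCS#1 CA key format, serial number and two-year validity are assumptions for illustration, not minikube's exact implementation.

// sketch: sign a server certificate whose SANs match the log line above,
// using an already existing CA cert/key pair (paths are placeholders).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// loadCA reads a PEM CA certificate and (assumed PKCS#1 RSA) private key.
func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey) {
	certPEM, _ := os.ReadFile(certPath)
	keyPEM, _ := os.ReadFile(keyPath)
	certBlock, _ := pem.Decode(certPEM)
	keyBlock, _ := pem.Decode(keyPEM)
	cert, _ := x509.ParseCertificate(certBlock.Bytes)
	key, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	return cert, key
}

func main() {
	caCert, caKey := loadCA("ca.pem", "ca-key.pem") // placeholder paths

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-998147"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(2 * 365 * 24 * time.Hour), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the "san=[...]" list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-998147"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.208")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}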
	I0717 19:33:32.217560  459061 start.go:364] duration metric: took 53.370503448s to acquireMachinesLock for "embed-certs-637675"
	I0717 19:33:32.217640  459061 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:33:32.217653  459061 fix.go:54] fixHost starting: 
	I0717 19:33:32.218221  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:32.218273  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:32.236152  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0717 19:33:32.236693  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:32.237234  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:33:32.237261  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:32.237630  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:32.237827  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:32.237981  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:33:32.239582  459061 fix.go:112] recreateIfNeeded on embed-certs-637675: state=Stopped err=<nil>
	I0717 19:33:32.239630  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	W0717 19:33:32.239777  459061 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:33:32.241662  459061 out.go:177] * Restarting existing kvm2 VM for "embed-certs-637675" ...
	I0717 19:33:28.164383  459147 pod_ready.go:92] pod "etcd-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.164416  459147 pod_ready.go:81] duration metric: took 1.506759615s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.164430  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.169329  459147 pod_ready.go:92] pod "kube-apiserver-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.169359  459147 pod_ready.go:81] duration metric: took 4.920897ms for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.169374  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.174231  459147 pod_ready.go:92] pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.174256  459147 pod_ready.go:81] duration metric: took 4.874197ms for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.174270  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:30.181752  459147 pod_ready.go:102] pod "kube-proxy-x85f5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:32.181095  459147 pod_ready.go:92] pod "kube-proxy-x85f5" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:32.181128  459147 pod_ready.go:81] duration metric: took 4.006849577s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.181146  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.186196  459147 pod_ready.go:92] pod "kube-scheduler-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:32.186226  459147 pod_ready.go:81] duration metric: took 5.071066ms for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.186240  459147 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
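Annotation (not part of the log): the pod_ready.go waits above poll each control-plane pod until its Ready condition turns True, or a 6m deadline expires. A rough client-go equivalent of that loop is sketched below; the kubeconfig path, namespace, pod name and 2s poll interval are assumptions, and this is illustrative rather than minikube's actual helper.

// sketch: poll a pod until its Ready condition is True or a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-no-preload-713715", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to become Ready")
		case <-time.After(2 * time.Second): // assumed poll interval
		}
	}
}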
	I0717 19:33:31.522479  459741 provision.go:177] copyRemoteCerts
	I0717 19:33:31.522546  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:31.522602  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.525768  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.526171  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.526203  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.526344  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.526551  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.526724  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.526904  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:31.612117  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 19:33:31.638832  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:31.664757  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:31.689941  459741 provision.go:87] duration metric: took 395.916596ms to configureAuth
	I0717 19:33:31.689975  459741 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:31.690190  459741 config.go:182] Loaded profile config "old-k8s-version-998147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:33:31.690265  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.692837  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.693207  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.693234  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.693449  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.693671  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.693826  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.694059  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.694245  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.694413  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.694429  459741 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:31.974825  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:31.974852  459741 machine.go:97] duration metric: took 1.053320969s to provisionDockerMachine
	I0717 19:33:31.974865  459741 start.go:293] postStartSetup for "old-k8s-version-998147" (driver="kvm2")
	I0717 19:33:31.974875  459741 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:31.974896  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:31.975219  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:31.975248  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.978388  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.978767  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.978799  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.979026  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.979228  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.979423  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.979548  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.063516  459741 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:32.067826  459741 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:32.067854  459741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:32.067935  459741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:32.068032  459741 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:32.068178  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:32.077672  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:32.102750  459741 start.go:296] duration metric: took 127.86801ms for postStartSetup
	I0717 19:33:32.102793  459741 fix.go:56] duration metric: took 18.724124854s for fixHost
	I0717 19:33:32.102816  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.105928  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.106324  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.106349  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.106498  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.106750  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.106912  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.107091  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.107267  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:32.107435  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:32.107447  459741 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:33:32.217378  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244812.173823160
	
	I0717 19:33:32.217412  459741 fix.go:216] guest clock: 1721244812.173823160
	I0717 19:33:32.217424  459741 fix.go:229] Guest: 2024-07-17 19:33:32.17382316 +0000 UTC Remote: 2024-07-17 19:33:32.102798084 +0000 UTC m=+260.639424711 (delta=71.025076ms)
	I0717 19:33:32.217462  459741 fix.go:200] guest clock delta is within tolerance: 71.025076ms
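Annotation (not part of the log): the fix.go lines above read the guest's clock (the mangled format string is evidently date +%s.%N, judging from the 1721244812.173823160 output), compare it with the host clock, and accept the machine when the delta stays inside a tolerance. A tiny local sketch of that comparison is below; running date locally instead of over SSH and the 2s tolerance are assumptions.

// sketch: measure the skew between a machine's "date +%s.%N" output and the
// local clock, and accept it when the delta is within a tolerance.
package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	const tolerance = 2 * time.Second // assumed tolerance

	out, err := exec.Command("date", "+%s.%N").Output() // run over SSH in the real flow
	if err != nil {
		panic(err)
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if math.Abs(float64(delta)) > float64(tolerance) {
		fmt.Printf("clock delta %v exceeds tolerance %v\n", delta, tolerance)
		return
	}
	fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
}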
	I0717 19:33:32.217476  459741 start.go:83] releasing machines lock for "old-k8s-version-998147", held for 18.838841423s
	I0717 19:33:32.217515  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.217908  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:32.221349  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.221669  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.221701  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.221823  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222444  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222647  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222744  459741 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:32.222799  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.222935  459741 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:32.222963  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.225811  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.225842  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226180  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.226207  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226235  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.226252  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226347  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.226651  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.226654  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.226818  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.226911  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.226963  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.227238  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.227243  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.331645  459741 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:32.338968  459741 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:32.491164  459741 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:32.498407  459741 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:32.498472  459741 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:32.515829  459741 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:32.515858  459741 start.go:495] detecting cgroup driver to use...
	I0717 19:33:32.515926  459741 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:32.534094  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:32.549874  459741 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:32.549938  459741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:32.565389  459741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:32.580187  459741 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:32.709855  459741 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:32.889734  459741 docker.go:233] disabling docker service ...
	I0717 19:33:32.889804  459741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:32.909179  459741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:32.923944  459741 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:33.043740  459741 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:33.174272  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:33.189545  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:33.210166  459741 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 19:33:33.210238  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.222478  459741 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:33.222547  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.234479  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.247161  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.258702  459741 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
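Annotation (not part of the log): the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.2 as the pause image and cgroupfs (with conmon in the pod cgroup) as the cgroup manager. An approximate in-process version of those edits, sketched with Go's regexp package against an assumed local copy of the file, looks like this:

// sketch: apply the same pause_image / cgroup_manager / conmon_cgroup
// overrides that the sed commands above perform, on a local copy of the file.
package main

import (
	"os"
	"regexp"
)

func main() {
	path := "02-crio.conf" // assumed local copy; the real file lives under /etc/crio/crio.conf.d/
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// drop any existing conmon_cgroup line, then re-add it right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}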
	I0717 19:33:33.271516  459741 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:33.282032  459741 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:33.282087  459741 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:33.296554  459741 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:33:33.307378  459741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:33.447447  459741 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:33:33.606295  459741 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:33.606388  459741 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:33.611193  459741 start.go:563] Will wait 60s for crictl version
	I0717 19:33:33.611252  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:33.615370  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:33.660721  459741 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:33.660803  459741 ssh_runner.go:195] Run: crio --version
	I0717 19:33:33.695406  459741 ssh_runner.go:195] Run: crio --version
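Annotation (not part of the log): after restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to exist and then for crictl to report a version. A minimal sketch of that socket wait is below; the 500ms poll interval is an assumption.

// sketch: wait up to 60s for a runtime socket to show up before querying it.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket exists; crictl/crio can be queried now
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is ready")
}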
	I0717 19:33:33.727703  459741 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 19:33:32.243015  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Start
	I0717 19:33:32.243191  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring networks are active...
	I0717 19:33:32.244008  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring network default is active
	I0717 19:33:32.244302  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring network mk-embed-certs-637675 is active
	I0717 19:33:32.244826  459061 main.go:141] libmachine: (embed-certs-637675) Getting domain xml...
	I0717 19:33:32.245560  459061 main.go:141] libmachine: (embed-certs-637675) Creating domain...
	I0717 19:33:33.537081  459061 main.go:141] libmachine: (embed-certs-637675) Waiting to get IP...
	I0717 19:33:33.538117  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:33.538562  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:33.538630  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:33.538531  460929 retry.go:31] will retry after 245.180235ms: waiting for machine to come up
	I0717 19:33:33.784957  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:33.785535  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:33.785567  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:33.785490  460929 retry.go:31] will retry after 353.289988ms: waiting for machine to come up
	I0717 19:33:34.141088  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.141697  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.141721  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.141637  460929 retry.go:31] will retry after 404.344963ms: waiting for machine to come up
	I0717 19:33:34.547331  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.547928  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.547956  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.547822  460929 retry.go:31] will retry after 382.194721ms: waiting for machine to come up
	I0717 19:33:34.931269  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.931746  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.931776  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.931653  460929 retry.go:31] will retry after 485.884671ms: waiting for machine to come up
	I0717 19:33:35.419418  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:35.419957  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:35.419991  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:35.419896  460929 retry.go:31] will retry after 598.409396ms: waiting for machine to come up
	I0717 19:33:36.019507  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:36.020091  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:36.020118  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:36.020041  460929 retry.go:31] will retry after 815.010839ms: waiting for machine to come up
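Annotation (not part of the log): while old-k8s-version-998147 is being provisioned, the parallel embed-certs-637675 restart keeps polling libvirt for a DHCP lease, retrying with growing randomized delays (245ms, 353ms, 404ms, ... in the retry.go lines above). A generic sketch of that retry-with-jittered-backoff pattern is shown below; the lookup function, growth factor, jitter and cap are assumptions made for illustration.

// sketch: retry a lookup with jittered, growing delays until it succeeds
// or the attempt budget runs out.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for "ask libvirt for the domain's DHCP lease".
func lookupIP(attempt int) (string, error) {
	if attempt < 5 { // pretend the lease shows up on the 6th try
		return "", errors.New("no lease yet")
	}
	return "192.168.50.10", nil // placeholder address
}

func main() {
	base := 250 * time.Millisecond // assumed starting delay
	for attempt := 0; attempt < 20; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine came up with IP", ip)
			return
		}
		// grow the delay, add jitter, and cap it so we keep polling.
		delay := time.Duration(float64(base) * (1 + 0.5*float64(attempt)))
		delay += time.Duration(rand.Int63n(int64(100 * time.Millisecond)))
		if delay > 2*time.Second {
			delay = 2 * time.Second
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	panic("machine never reported an IP")
}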
	I0717 19:33:33.866250  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:35.869264  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:33.729003  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:33.732254  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:33.732730  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:33.732761  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:33.732992  459741 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:33.737578  459741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:33.751952  459741 kubeadm.go:883] updating cluster {Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:33.752069  459741 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:33:33.752141  459741 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:33.799085  459741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:33:33.799167  459741 ssh_runner.go:195] Run: which lz4
	I0717 19:33:33.803899  459741 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:33:33.808398  459741 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:33.808431  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 19:33:35.539736  459741 crio.go:462] duration metric: took 1.735871318s to copy over tarball
	I0717 19:33:35.539833  459741 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:34.210207  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:36.693543  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:36.837115  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:36.837531  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:36.837560  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:36.837482  460929 retry.go:31] will retry after 1.072167201s: waiting for machine to come up
	I0717 19:33:37.911591  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:37.912149  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:37.912173  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:37.912104  460929 retry.go:31] will retry after 1.782290473s: waiting for machine to come up
	I0717 19:33:39.696512  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:39.696980  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:39.697015  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:39.696923  460929 retry.go:31] will retry after 1.896567581s: waiting for machine to come up
	I0717 19:33:36.872836  459447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.872865  459447 pod_ready.go:81] duration metric: took 7.513409896s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.872876  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qq6gq" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.878642  459447 pod_ready.go:92] pod "kube-proxy-qq6gq" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.878665  459447 pod_ready.go:81] duration metric: took 5.782297ms for pod "kube-proxy-qq6gq" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.878673  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.887916  459447 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.887943  459447 pod_ready.go:81] duration metric: took 9.259629ms for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.887957  459447 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:39.411899  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:38.677338  459741 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.137463162s)
	I0717 19:33:38.677381  459741 crio.go:469] duration metric: took 3.137607875s to extract the tarball
	I0717 19:33:38.677396  459741 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:38.721981  459741 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:38.756640  459741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:33:38.756670  459741 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:33:38.756755  459741 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:38.756840  459741 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:38.756885  459741 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:38.756923  459741 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.756887  459741 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 19:33:38.756866  459741 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:38.756875  459741 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:38.757061  459741 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:38.758622  459741 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:38.758705  459741 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 19:33:38.758860  459741 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:38.758902  459741 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.758945  459741 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:38.758977  459741 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:38.759058  459741 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:38.759126  459741 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:38.947033  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 19:33:38.978340  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.989519  459741 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 19:33:38.989583  459741 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 19:33:38.989631  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.007170  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.034177  459741 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 19:33:39.034232  459741 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:39.034282  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.034287  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 19:33:39.062389  459741 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 19:33:39.062443  459741 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.062490  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.080521  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:39.080640  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 19:33:39.080739  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.101886  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.114010  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.122572  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.131514  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 19:33:39.145327  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 19:33:39.187564  459741 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 19:33:39.187685  459741 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.187756  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.192838  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.232745  459741 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 19:33:39.232807  459741 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.232822  459741 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 19:33:39.232864  459741 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.232897  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.232918  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.232867  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.249586  459741 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 19:33:39.249634  459741 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.249677  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.280522  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 19:33:39.280616  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.280622  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.280736  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.354545  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 19:33:39.354577  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 19:33:39.354740  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 19:33:39.640493  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:39.792919  459741 cache_images.go:92] duration metric: took 1.03622454s to LoadCachedImages
	W0717 19:33:39.793071  459741 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0717 19:33:39.793093  459741 kubeadm.go:934] updating node { 192.168.72.208 8443 v1.20.0 crio true true} ...
	I0717 19:33:39.793266  459741 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-998147 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:39.793390  459741 ssh_runner.go:195] Run: crio config
	I0717 19:33:39.854291  459741 cni.go:84] Creating CNI manager for ""
	I0717 19:33:39.854320  459741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:39.854333  459741 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:39.854355  459741 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-998147 NodeName:old-k8s-version-998147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 19:33:39.854569  459741 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-998147"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:39.854672  459741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 19:33:39.865802  459741 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:39.865892  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:39.878728  459741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 19:33:39.899402  459741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:39.917946  459741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
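
The scp above stages the rendered kubeadm config as /var/tmp/minikube/kubeadm.yaml.new; later in this log it is diffed against the live /var/tmp/minikube/kubeadm.yaml and only copied over when the contents differ. A minimal local sketch of that stage-diff-promote pattern in Go (paths and the helper name are illustrative, not minikube's implementation):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // stageConfig writes the rendered config next to the live file and only
    // promotes it when `diff -u` reports a difference (non-zero exit).
    func stageConfig(rendered []byte, livePath string) error {
        newPath := livePath + ".new"
        if err := os.WriteFile(newPath, rendered, 0o644); err != nil {
            return err
        }
        if err := exec.Command("diff", "-u", livePath, newPath).Run(); err == nil {
            return nil // identical: keep the existing file
        }
        return os.Rename(newPath, livePath)
    }

    func main() {
        cfg := []byte("apiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\n")
        if err := stageConfig(cfg, "/tmp/kubeadm.yaml"); err != nil {
            fmt.Println("stage failed:", err)
        }
    }
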
	I0717 19:33:39.937916  459741 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:39.942211  459741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
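
The bash one-liner above rewrites /etc/hosts by filtering out any stale control-plane.minikube.internal entry and appending the current IP. A small Go sketch of the same filter-and-append step, assuming a plain writable file instead of the sudo/tmp-copy dance in the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostEntry drops any existing line ending in "<TAB>host" and appends "ip<TAB>host".
    func ensureHostEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line == "" || strings.HasSuffix(line, "\t"+host) {
                continue // skip blanks and the stale entry
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostEntry("/tmp/hosts.example", "192.168.72.208", "control-plane.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }
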
	I0717 19:33:39.957083  459741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:40.077407  459741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:40.096211  459741 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147 for IP: 192.168.72.208
	I0717 19:33:40.096244  459741 certs.go:194] generating shared ca certs ...
	I0717 19:33:40.096269  459741 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:40.096511  459741 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:40.096578  459741 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:40.096592  459741 certs.go:256] generating profile certs ...
	I0717 19:33:40.096727  459741 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/client.key
	I0717 19:33:40.096794  459741 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key.204e9011
	I0717 19:33:40.096852  459741 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key
	I0717 19:33:40.097009  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:40.097049  459741 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:40.097062  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:40.097095  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:40.097133  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:40.097161  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:40.097215  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:40.097920  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:40.144174  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:40.182700  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:40.222340  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:40.259248  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 19:33:40.302619  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:33:40.335170  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:40.373447  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:33:40.409075  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:40.435692  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:40.460419  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:40.492357  459741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:40.515212  459741 ssh_runner.go:195] Run: openssl version
	I0717 19:33:40.523462  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:40.537951  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.544201  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.544264  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.552233  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:40.567486  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:40.583035  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.589287  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.589367  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.595802  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:40.613013  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:40.625080  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.630225  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.630298  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.636697  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:40.647728  459741 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:40.653165  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:40.659380  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:40.666126  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:40.673361  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:40.680123  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:40.686669  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
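
Each `openssl x509 ... -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether certs need regenerating. The same check can be expressed with Go's crypto/x509; this is a hypothetical helper, not the code that produced this log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
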
	I0717 19:33:40.693569  459741 kubeadm.go:392] StartCluster: {Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:40.693682  459741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:40.693767  459741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:40.737536  459741 cri.go:89] found id: ""
	I0717 19:33:40.737637  459741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:40.749268  459741 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:40.749292  459741 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:40.749347  459741 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:40.760298  459741 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:40.761436  459741 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-998147" does not appear in /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:33:40.762162  459741 kubeconfig.go:62] /home/jenkins/minikube-integration/19282-392903/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-998147" cluster setting kubeconfig missing "old-k8s-version-998147" context setting]
	I0717 19:33:40.763136  459741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:40.860353  459741 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:40.871291  459741 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.208
	I0717 19:33:40.871329  459741 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:40.871348  459741 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:40.871404  459741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:40.909329  459741 cri.go:89] found id: ""
	I0717 19:33:40.909419  459741 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:40.926501  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:40.937534  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:40.937565  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:40.937640  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:40.946613  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:40.946692  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:40.956996  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:40.965988  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:40.966046  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:40.975285  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:40.984577  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:40.984642  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:40.994458  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:41.007766  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:41.007821  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:41.020451  459741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:41.034173  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:41.176766  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:38.694137  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:40.694562  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:41.594983  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:41.595523  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:41.595554  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:41.595469  460929 retry.go:31] will retry after 2.022688841s: waiting for machine to come up
	I0717 19:33:43.619805  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:43.620241  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:43.620277  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:43.620212  460929 retry.go:31] will retry after 3.581051367s: waiting for machine to come up
	I0717 19:33:41.896941  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:44.394301  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:42.579917  459741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.403105878s)
	I0717 19:33:42.579958  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:42.840718  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:42.961394  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
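
The `kubeadm init phase` runs above rebuild the control plane piece by piece (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of re-running a full init. A hedged sketch of driving that phase list from Go, assuming kubeadm is on the local PATH rather than behind ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same phase order as the log: certs, kubeconfig, kubelet-start, control-plane, etcd.
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            args := append([]string{"init", "phase"}, strings.Fields(p)...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            out, err := exec.Command("kubeadm", args...).CombinedOutput()
            if err != nil {
                fmt.Printf("phase %q failed: %v\n%s", p, err, out)
                return
            }
        }
        fmt.Println("control plane phases completed")
    }
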
	I0717 19:33:43.055710  459741 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:43.055799  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:43.556468  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:44.055954  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:44.555966  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:45.056266  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:45.556627  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:46.056807  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:42.695989  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:45.194178  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:47.195661  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:47.205836  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:47.206321  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:47.206343  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:47.206278  460929 retry.go:31] will retry after 4.261122451s: waiting for machine to come up
	I0717 19:33:46.894466  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:49.395152  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:46.555904  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:47.056616  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:47.556787  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:48.056072  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:48.555979  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:49.056074  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:49.556619  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:50.056758  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:50.555862  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:51.055991  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
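
The repeated `pgrep -xnf kube-apiserver.*minikube.*` runs above are a plain poll: retry roughly every 500ms until the apiserver process appears. A minimal sketch of that wait loop (the one-minute timeout is illustrative, not the value minikube's api_server wait actually uses):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until the pattern matches or the timeout expires.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 as soon as a matching process exists.
            if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
        if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
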
	I0717 19:33:49.692660  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.693700  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.470426  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.470961  459061 main.go:141] libmachine: (embed-certs-637675) Found IP for machine: 192.168.39.140
	I0717 19:33:51.470987  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has current primary IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.470994  459061 main.go:141] libmachine: (embed-certs-637675) Reserving static IP address...
	I0717 19:33:51.471473  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "embed-certs-637675", mac: "52:54:00:33:d5:fa", ip: "192.168.39.140"} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.471502  459061 main.go:141] libmachine: (embed-certs-637675) Reserved static IP address: 192.168.39.140
	I0717 19:33:51.471530  459061 main.go:141] libmachine: (embed-certs-637675) DBG | skip adding static IP to network mk-embed-certs-637675 - found existing host DHCP lease matching {name: "embed-certs-637675", mac: "52:54:00:33:d5:fa", ip: "192.168.39.140"}
	I0717 19:33:51.471548  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Getting to WaitForSSH function...
	I0717 19:33:51.471563  459061 main.go:141] libmachine: (embed-certs-637675) Waiting for SSH to be available...
	I0717 19:33:51.474038  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.474414  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.474445  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.474588  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Using SSH client type: external
	I0717 19:33:51.474617  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa (-rw-------)
	I0717 19:33:51.474655  459061 main.go:141] libmachine: (embed-certs-637675) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:51.474675  459061 main.go:141] libmachine: (embed-certs-637675) DBG | About to run SSH command:
	I0717 19:33:51.474699  459061 main.go:141] libmachine: (embed-certs-637675) DBG | exit 0
	I0717 19:33:51.604737  459061 main.go:141] libmachine: (embed-certs-637675) DBG | SSH cmd err, output: <nil>: 
	I0717 19:33:51.605100  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetConfigRaw
	I0717 19:33:51.605831  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:51.608613  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.608977  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.609023  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.609289  459061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/config.json ...
	I0717 19:33:51.609523  459061 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:51.609557  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:51.609778  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.611949  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.612259  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.612295  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.612408  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.612598  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.612765  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.612911  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.613071  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.613293  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.613307  459061 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:51.716785  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:51.716815  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.717101  459061 buildroot.go:166] provisioning hostname "embed-certs-637675"
	I0717 19:33:51.717136  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.717318  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.719807  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.720137  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.720163  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.720315  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.720545  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.720719  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.720892  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.721086  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.721258  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.721271  459061 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-637675 && echo "embed-certs-637675" | sudo tee /etc/hostname
	I0717 19:33:51.844077  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-637675
	
	I0717 19:33:51.844111  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.847369  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.847949  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.847987  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.848185  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.848361  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.848523  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.848703  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.848912  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.849127  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.849145  459061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-637675' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-637675/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-637675' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:51.961570  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:33:51.961608  459061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:51.961632  459061 buildroot.go:174] setting up certificates
	I0717 19:33:51.961644  459061 provision.go:84] configureAuth start
	I0717 19:33:51.961658  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.961931  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:51.964788  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.965123  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.965150  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.965303  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.967517  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.967881  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.967910  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.968060  459061 provision.go:143] copyHostCerts
	I0717 19:33:51.968129  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:51.968140  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:51.968203  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:51.968333  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:51.968344  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:51.968371  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:51.968546  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:51.968558  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:51.968605  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:51.968692  459061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.embed-certs-637675 san=[127.0.0.1 192.168.39.140 embed-certs-637675 localhost minikube]
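
The provision step above generates a server certificate whose SANs cover 127.0.0.1, the machine IP, the machine name, localhost and minikube. The sketch below shows the same SAN fields with Go's crypto/x509; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair named in the log:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-637675"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the san=[...] list in the log line above.
            DNSNames:    []string{"embed-certs-637675", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.140")},
        }
        // Self-signed here for brevity; minikube signs with its own CA instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
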
	I0717 19:33:52.257323  459061 provision.go:177] copyRemoteCerts
	I0717 19:33:52.257408  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:52.257443  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.260461  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.260873  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.260897  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.261094  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.261307  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.261485  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.261619  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.347197  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:52.372509  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 19:33:52.397643  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:52.421482  459061 provision.go:87] duration metric: took 459.823049ms to configureAuth
	I0717 19:33:52.421511  459061 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:52.421712  459061 config.go:182] Loaded profile config "embed-certs-637675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:33:52.421789  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.424390  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.424796  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.424827  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.425027  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.425221  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.425363  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.425502  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.425661  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:52.425872  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:52.425902  459061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:52.699426  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:52.699458  459061 machine.go:97] duration metric: took 1.089918524s to provisionDockerMachine
	I0717 19:33:52.699470  459061 start.go:293] postStartSetup for "embed-certs-637675" (driver="kvm2")
	I0717 19:33:52.699483  459061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:52.699505  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.699888  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:52.699943  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.703018  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.703417  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.703463  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.703693  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.704007  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.704318  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.704519  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.791925  459061 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:52.795954  459061 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:52.795980  459061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:52.796095  459061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:52.796191  459061 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:52.796308  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:52.805548  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:52.829531  459061 start.go:296] duration metric: took 130.04771ms for postStartSetup
	I0717 19:33:52.829569  459061 fix.go:56] duration metric: took 20.611916701s for fixHost
	I0717 19:33:52.829611  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.832274  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.832744  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.832778  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.832883  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.833094  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.833276  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.833448  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.833632  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:52.833852  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:52.833871  459061 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 19:33:52.941152  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244832.915250809
	
	I0717 19:33:52.941180  459061 fix.go:216] guest clock: 1721244832.915250809
	I0717 19:33:52.941194  459061 fix.go:229] Guest: 2024-07-17 19:33:52.915250809 +0000 UTC Remote: 2024-07-17 19:33:52.829573693 +0000 UTC m=+356.572558813 (delta=85.677116ms)
	I0717 19:33:52.941221  459061 fix.go:200] guest clock delta is within tolerance: 85.677116ms
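
The fix.go lines above compare the guest clock (read via `date +%s.%N`) against the host's notion of the time and skip a resync when the delta is within tolerance. A tiny sketch of that comparison using the values from this log (the 2s tolerance is an assumption for illustration, not minikube's actual threshold):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest time parsed from `date +%s.%N`; host time from the log's Remote field.
        guest := time.Unix(1721244832, 915250809)
        host := time.Date(2024, 7, 17, 19, 33, 52, 829573693, time.UTC)
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // illustrative threshold, not minikube's actual value
        fmt.Printf("delta=%s, within tolerance: %v\n", delta, delta <= tolerance)
    }
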
	I0717 19:33:52.941232  459061 start.go:83] releasing machines lock for "embed-certs-637675", held for 20.723622875s
	I0717 19:33:52.941257  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.941557  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:52.944096  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.944498  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.944526  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.944682  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945170  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945409  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945520  459061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:52.945595  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.945624  459061 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:52.945653  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.948197  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948530  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.948557  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948575  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948781  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.948912  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.948936  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948966  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.949080  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.949205  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.949228  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.949348  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.949352  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.949465  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:53.054206  459061 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:53.060916  459061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:53.204303  459061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:53.210204  459061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:53.210262  459061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:53.226045  459061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:53.226072  459061 start.go:495] detecting cgroup driver to use...
	I0717 19:33:53.226138  459061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:53.243047  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:53.256611  459061 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:53.256678  459061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:53.269932  459061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:53.285394  459061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:53.412896  459061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:53.573675  459061 docker.go:233] disabling docker service ...
	I0717 19:33:53.573749  459061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:53.590083  459061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:53.603710  459061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:53.727530  459061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:53.873274  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:53.905871  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:53.926509  459061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:33:53.926583  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.937258  459061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:53.937333  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.947782  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.958191  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.970004  459061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:53.982062  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.992942  459061 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:54.011137  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:54.022170  459061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:54.033118  459061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:54.033183  459061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:54.046510  459061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:33:54.056086  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:54.203486  459061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:33:54.336557  459061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:54.336645  459061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:54.342342  459061 start.go:563] Will wait 60s for crictl version
	I0717 19:33:54.342422  459061 ssh_runner.go:195] Run: which crictl
	I0717 19:33:54.346334  459061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:54.388801  459061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:54.388898  459061 ssh_runner.go:195] Run: crio --version
	I0717 19:33:54.419237  459061 ssh_runner.go:195] Run: crio --version
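The preceding block rewrites /etc/crio/crio.conf.d/02-crio.conf via sed (pause image, cgroupfs manager, conmon_cgroup, the unprivileged-port sysctl), restarts CRI-O, then gives it up to 60s for the socket to appear and for `crictl version` to answer. A minimal local sketch of that readiness wait, assuming the socket path shown above (helper name is hypothetical, not minikube's start.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForCRIO polls for the CRI-O socket and a working `crictl version`,
// giving up after timeout, similar to the two 60s waits logged above.
func waitForCRIO(socket string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(socket); err == nil {
			if out, err := exec.Command("sudo", "crictl", "version").CombinedOutput(); err == nil {
				fmt.Printf("crictl reports:\n%s", out)
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("CRI-O not ready after %s", timeout)
}

func main() {
	if err := waitForCRIO("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}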
	I0717 19:33:54.459513  459061 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 19:33:54.460727  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:54.463803  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:54.464194  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:54.464235  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:54.464521  459061 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:54.469869  459061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
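The grep above checks whether host.minikube.internal is already present in /etc/hosts; if not, the bash one-liner filters out any stale entry and appends a fresh one. A hedged Go sketch of the same filter-and-append idea (operating on a local hosts file; function name is illustrative only):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any line ending in "\t<host>" and appends
// "<ip>\t<host>", mirroring the grep -v / echo pipeline in the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}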
	I0717 19:33:54.484510  459061 kubeadm.go:883] updating cluster {Name:embed-certs-637675 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:54.484680  459061 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:33:54.484750  459061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:54.530253  459061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 19:33:54.530339  459061 ssh_runner.go:195] Run: which lz4
	I0717 19:33:54.534466  459061 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:33:54.538610  459061 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:54.538642  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 19:33:55.923529  459061 crio.go:462] duration metric: took 1.389095679s to copy over tarball
	I0717 19:33:55.923617  459061 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:51.894538  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:53.896853  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:56.394940  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.556187  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:52.056816  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:52.555884  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.056440  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.556003  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:54.056810  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:54.556947  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:55.055878  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:55.556110  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:56.056460  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.693746  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:55.695193  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:58.139069  459061 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.215401803s)
	I0717 19:33:58.139116  459061 crio.go:469] duration metric: took 2.215553314s to extract the tarball
	I0717 19:33:58.139127  459061 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:58.178293  459061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:58.219163  459061 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:33:58.219188  459061 cache_images.go:84] Images are preloaded, skipping loading
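The preload logic above asks crictl for the image list, and only copies and extracts the ~395 MB lz4 tarball when the expected kube-apiserver tag is missing; the second listing after extraction then confirms everything is present. A simplified sketch of that decision (local exec, crude substring match, invented function name - not minikube's preload package):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasPreloadedImage reports whether `crictl images --output json` already
// mentions the wanted image, the same check performed before deciding to
// copy /preloaded.tar.lz4 over.
func hasPreloadedImage(image string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), image), nil
}

func main() {
	ok, err := hasPreloadedImage("registry.k8s.io/kube-apiserver:v1.30.2")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	if ok {
		fmt.Println("images already preloaded, skipping tarball copy")
		return
	}
	fmt.Println("would extract preload with: tar --xattrs -I lz4 -C /var -xf /preloaded.tar.lz4")
}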
	I0717 19:33:58.219197  459061 kubeadm.go:934] updating node { 192.168.39.140 8443 v1.30.2 crio true true} ...
	I0717 19:33:58.219306  459061 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-637675 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:58.219383  459061 ssh_runner.go:195] Run: crio config
	I0717 19:33:58.262906  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:33:58.262925  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:58.262934  459061 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:58.262957  459061 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.140 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-637675 NodeName:embed-certs-637675 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:58.263084  459061 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-637675"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:58.263147  459061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 19:33:58.273657  459061 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:58.273723  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:58.283599  459061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0717 19:33:58.300393  459061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:58.317742  459061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0717 19:33:58.334880  459061 ssh_runner.go:195] Run: grep 192.168.39.140	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:58.338573  459061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:58.350476  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:58.480706  459061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:58.498116  459061 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675 for IP: 192.168.39.140
	I0717 19:33:58.498139  459061 certs.go:194] generating shared ca certs ...
	I0717 19:33:58.498161  459061 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:58.498326  459061 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:58.498380  459061 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:58.498394  459061 certs.go:256] generating profile certs ...
	I0717 19:33:58.498518  459061 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/client.key
	I0717 19:33:58.498580  459061 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.key.c8cdbf09
	I0717 19:33:58.498853  459061 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.key
	I0717 19:33:58.499016  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:58.499066  459061 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:58.499081  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:58.499115  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:58.499256  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:58.499299  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:58.499435  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:58.500359  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:58.544981  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:58.588099  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:58.621983  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:58.652262  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 19:33:58.676887  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:33:58.701437  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:58.726502  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:58.751839  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:58.777500  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:58.801388  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:58.825450  459061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:58.842717  459061 ssh_runner.go:195] Run: openssl version
	I0717 19:33:58.848256  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:58.858519  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.863057  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.863130  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.869045  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:58.879255  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:58.890546  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.895342  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.895394  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.901225  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:58.912043  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:58.922557  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.926974  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.927063  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.932819  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:58.943396  459061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:58.947900  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:58.953946  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:58.960139  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:58.965932  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:58.971638  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:58.977437  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
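Each `openssl x509 -checkend 86400` call above asks whether the given certificate will still be valid in 24 hours. The same check can be expressed with Go's crypto/x509; a small sketch (the file path is only an example taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is valid for at least
// d more time, the equivalent of `openssl x509 -checkend <seconds>`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}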
	I0717 19:33:58.983041  459061 kubeadm.go:392] StartCluster: {Name:embed-certs-637675 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:58.983125  459061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:58.983159  459061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:59.026606  459061 cri.go:89] found id: ""
	I0717 19:33:59.026700  459061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:59.037020  459061 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:59.037045  459061 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:59.037089  459061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:59.046698  459061 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:59.047817  459061 kubeconfig.go:125] found "embed-certs-637675" server: "https://192.168.39.140:8443"
	I0717 19:33:59.049941  459061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:59.059451  459061 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.140
	I0717 19:33:59.059482  459061 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:59.059500  459061 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:59.059544  459061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:59.095066  459061 cri.go:89] found id: ""
	I0717 19:33:59.095128  459061 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:59.112170  459061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:59.122995  459061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:59.123014  459061 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:59.123063  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:59.133289  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:59.133372  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:59.143515  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:59.152845  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:59.152898  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:59.162821  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:59.173290  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:59.173353  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:59.184053  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:59.195281  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:59.195345  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
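The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it. A condensed sketch of that loop (local files, hypothetical helper name):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes each config that does not reference endpoint,
// mirroring the grep-then-rm sequence in the log above.
func pruneStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already points at the expected endpoint, keep it
		}
		if err := os.Remove(p); err != nil && !os.IsNotExist(err) {
			fmt.Fprintf(os.Stderr, "remove %s: %v\n", p, err)
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}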
	I0717 19:33:59.205300  459061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:59.219019  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:59.337326  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.220304  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.451460  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.631448  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
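The five commands above replay individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml, with the versioned binaries directory prepended to PATH. A hedged sketch of driving that same sequence from Go (helper name invented; the real runner executes these over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// runInitPhases runs each kubeadm "init phase" from the log above, prepending
// the versioned binaries directory to PATH before invoking kubeadm.
func runInitPhases(binDir, config string, phases []string) error {
	for _, phase := range phases {
		args := append([]string{"env", "PATH=" + binDir + ":" + os.Getenv("PATH"),
			"kubeadm", "init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", config)
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("phase %q: %w", phase, err)
		}
	}
	return nil
}

func main() {
	err := runInitPhases("/var/lib/minikube/binaries/v1.30.2", "/var/tmp/minikube/kubeadm.yaml",
		[]string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}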
	I0717 19:34:00.701064  459061 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:34:00.701166  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.201848  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.895830  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.394535  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:56.556934  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.055977  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.556878  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.056308  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.556348  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:59.056674  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:59.556870  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:00.055931  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:00.555977  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.055886  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.695265  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:59.973534  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:02.193004  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.701254  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.809514  459061 api_server.go:72] duration metric: took 1.10844859s to wait for apiserver process to appear ...
	I0717 19:34:01.809547  459061 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:34:01.809597  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:01.810183  459061 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I0717 19:34:02.309904  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.789701  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:34:04.789732  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:34:04.789745  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.862326  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:34:04.862359  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:34:04.862371  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.885715  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:04.885755  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
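The repeated /healthz probes above show the expected progression while the restarted apiserver comes up: first connection refused, then 403 for the anonymous user, then 500 while poststarthooks (rbac/bootstrap-roles, apiservice-discovery-controller, ...) finish, and finally 200. A hedged sketch of such a polling loop; InsecureSkipVerify is used only to keep the example short, whereas the real check would present the cluster CA and client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url every interval until it returns 200 or the timeout
// expires, tolerating the transient refused/403/500 answers seen in the log.
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Example only: skip TLS verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz not ready (%d): %s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.140:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}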
	I0717 19:34:05.310281  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:05.314611  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:05.314645  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:05.810297  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:05.817458  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:05.817492  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:03.395467  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:05.894353  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.556897  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:02.056800  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:02.556122  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:03.056427  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:03.556914  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.056571  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.556144  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:05.056037  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:05.555875  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:06.056743  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.193618  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.194585  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.310494  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:06.318694  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:06.318740  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:06.809794  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:06.815231  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:06.815259  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:07.310287  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:07.314865  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:07.314892  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:07.810489  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:07.815153  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:07.815184  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:08.310494  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:08.315173  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0717 19:34:08.321509  459061 api_server.go:141] control plane version: v1.30.2
	I0717 19:34:08.321539  459061 api_server.go:131] duration metric: took 6.51198343s to wait for apiserver health ...
	I0717 19:34:08.321550  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:34:08.321558  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:34:08.323369  459061 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:34:08.324555  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:34:08.336384  459061 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:34:08.357196  459061 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:34:08.373813  459061 system_pods.go:59] 8 kube-system pods found
	I0717 19:34:08.373849  459061 system_pods.go:61] "coredns-7db6d8ff4d-8brst" [aec5eaab-66a7-4221-84a1-b7967bd26cb8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:34:08.373856  459061 system_pods.go:61] "etcd-embed-certs-637675" [f2e395a3-fd1f-4a92-98ce-d6093d7b2faf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:34:08.373864  459061 system_pods.go:61] "kube-apiserver-embed-certs-637675" [358154e3-59e5-4535-9e1d-ee3b9eab5464] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:34:08.373871  459061 system_pods.go:61] "kube-controller-manager-embed-certs-637675" [641c70ba-a6fa-4975-bdb5-727b5ba64a87] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:34:08.373875  459061 system_pods.go:61] "kube-proxy-4cv66" [1a561d4e-4910-4ff0-9a1e-070e60e27cb4] Running
	I0717 19:34:08.373879  459061 system_pods.go:61] "kube-scheduler-embed-certs-637675" [83f50c1c-44ca-4b1f-ad85-0c617f1c8a67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:34:08.373886  459061 system_pods.go:61] "metrics-server-569cc877fc-mtnc6" [c44ea24f-67b5-4540-8c27-5b0068ac55b1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:34:08.373889  459061 system_pods.go:61] "storage-provisioner" [c42c411b-4206-4686-95c4-c9c279877684] Running
	I0717 19:34:08.373895  459061 system_pods.go:74] duration metric: took 16.671935ms to wait for pod list to return data ...
	I0717 19:34:08.373902  459061 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:34:08.388698  459061 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:34:08.388737  459061 node_conditions.go:123] node cpu capacity is 2
	I0717 19:34:08.388749  459061 node_conditions.go:105] duration metric: took 14.84302ms to run NodePressure ...
	I0717 19:34:08.388769  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:08.750983  459061 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:34:08.759547  459061 kubeadm.go:739] kubelet initialised
	I0717 19:34:08.759579  459061 kubeadm.go:740] duration metric: took 8.564098ms waiting for restarted kubelet to initialise ...
	I0717 19:34:08.759592  459061 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:34:08.769683  459061 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.780332  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.780364  459061 pod_ready.go:81] duration metric: took 10.641436ms for pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.780377  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.780387  459061 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.791556  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "etcd-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.791590  459061 pod_ready.go:81] duration metric: took 11.19204ms for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.791605  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "etcd-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.791613  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.801822  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.801874  459061 pod_ready.go:81] duration metric: took 10.246706ms for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.801889  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.801905  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.807704  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.807735  459061 pod_ready.go:81] duration metric: took 5.8166ms for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.807747  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.807755  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4cv66" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:09.161548  459061 pod_ready.go:92] pod "kube-proxy-4cv66" in "kube-system" namespace has status "Ready":"True"
	I0717 19:34:09.161587  459061 pod_ready.go:81] duration metric: took 353.822822ms for pod "kube-proxy-4cv66" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:09.161597  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:11.168387  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:07.894730  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:09.895797  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.556740  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:07.056120  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:07.556375  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.055926  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.556426  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:09.056856  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:09.556032  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:10.056791  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:10.556117  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:11.056198  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.694237  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:11.192662  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:13.168686  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:15.668585  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:12.395034  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:14.895242  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:11.556103  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:12.056463  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:12.556709  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.056048  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.556926  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:14.056810  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:14.556793  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:15.056168  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:15.556716  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:16.056041  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.194925  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:15.693550  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:17.668639  459061 pod_ready.go:92] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:34:17.668755  459061 pod_ready.go:81] duration metric: took 8.50714283s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:17.668772  459061 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:19.678850  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:17.395670  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:19.395898  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:21.396841  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:16.556695  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.056877  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.556620  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:18.056628  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:18.556552  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:19.056137  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:19.556627  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:20.056655  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:20.556041  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:21.056058  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.694895  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:20.194174  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:22.176132  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:24.674293  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:23.894981  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.394921  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:21.556663  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.056552  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.556508  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:23.056623  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:23.556414  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:24.055964  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:24.556741  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:25.056721  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:25.556914  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:26.056520  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.693472  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:24.693880  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.695637  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.675680  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:29.176560  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:28.896034  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.394391  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.555925  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:27.056754  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:27.555925  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:28.056226  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:28.556626  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.056219  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.556961  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:30.056546  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:30.555883  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:31.056398  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.195231  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.693669  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.674839  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:33.676172  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:35.676669  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:33.394904  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:35.399901  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.556766  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:32.056928  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:32.556232  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:33.055917  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:33.556864  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.056869  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.555951  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:35.056718  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:35.556230  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:36.056542  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.195066  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:36.692760  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:38.175828  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:40.676034  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:37.894862  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:40.399004  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:36.556557  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:37.056940  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:37.556241  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.056369  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.555969  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:39.056289  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:39.556107  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:40.055999  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:40.556561  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:41.055882  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.693922  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:41.194229  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:42.676087  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:44.680245  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:42.898155  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:45.402470  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:41.556589  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:42.055932  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:42.556345  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:43.056754  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:43.056873  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:43.097168  459741 cri.go:89] found id: ""
	I0717 19:34:43.097214  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.097226  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:43.097234  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:43.097302  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:43.139033  459741 cri.go:89] found id: ""
	I0717 19:34:43.139067  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.139077  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:43.139084  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:43.139138  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:43.179520  459741 cri.go:89] found id: ""
	I0717 19:34:43.179549  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.179558  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:43.179566  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:43.179705  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:43.216014  459741 cri.go:89] found id: ""
	I0717 19:34:43.216044  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.216063  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:43.216071  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:43.216141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:43.250985  459741 cri.go:89] found id: ""
	I0717 19:34:43.251030  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.251038  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:43.251044  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:43.251109  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:43.286797  459741 cri.go:89] found id: ""
	I0717 19:34:43.286840  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.286849  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:43.286856  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:43.286919  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:43.321626  459741 cri.go:89] found id: ""
	I0717 19:34:43.321657  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.321665  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:43.321671  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:43.321733  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:43.355415  459741 cri.go:89] found id: ""
	I0717 19:34:43.355444  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.355452  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:43.355462  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:43.355476  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:43.409331  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:43.409369  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:43.424013  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:43.424038  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:43.559102  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:43.559132  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:43.559149  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:43.625751  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:43.625791  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:46.168132  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:46.196943  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:46.197013  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:46.254167  459741 cri.go:89] found id: ""
	I0717 19:34:46.254197  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.254205  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:46.254211  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:46.254277  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:46.291018  459741 cri.go:89] found id: ""
	I0717 19:34:46.291052  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.291063  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:46.291072  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:46.291136  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:46.331767  459741 cri.go:89] found id: ""
	I0717 19:34:46.331812  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.331825  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:46.331835  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:46.331918  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:46.373157  459741 cri.go:89] found id: ""
	I0717 19:34:46.373206  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.373218  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:46.373226  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:46.373297  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:46.413014  459741 cri.go:89] found id: ""
	I0717 19:34:46.413041  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.413055  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:46.413061  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:46.413114  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:46.456115  459741 cri.go:89] found id: ""
	I0717 19:34:46.456148  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.456159  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:46.456167  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:46.456230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:46.492962  459741 cri.go:89] found id: ""
	I0717 19:34:46.493048  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.493063  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:46.493074  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:46.493149  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:43.195298  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:45.695368  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:47.175268  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:49.176199  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:47.895768  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:50.395078  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:46.533824  459741 cri.go:89] found id: ""
	I0717 19:34:46.533856  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.533868  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:46.533882  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:46.533899  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:46.614205  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:46.614229  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:46.614242  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:46.689833  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:46.689875  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:46.729427  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:46.729463  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:46.779887  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:46.779930  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:49.294846  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:49.308554  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:49.308625  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:49.343774  459741 cri.go:89] found id: ""
	I0717 19:34:49.343802  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.343810  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:49.343816  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:49.343872  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:49.380698  459741 cri.go:89] found id: ""
	I0717 19:34:49.380729  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.380737  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:49.380744  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:49.380796  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:49.422026  459741 cri.go:89] found id: ""
	I0717 19:34:49.422059  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.422073  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:49.422082  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:49.422147  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:49.465793  459741 cri.go:89] found id: ""
	I0717 19:34:49.465837  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.465850  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:49.465859  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:49.465929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:49.503462  459741 cri.go:89] found id: ""
	I0717 19:34:49.503507  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.503519  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:49.503528  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:49.503598  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:49.546776  459741 cri.go:89] found id: ""
	I0717 19:34:49.546808  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.546818  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:49.546826  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:49.546895  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:49.589367  459741 cri.go:89] found id: ""
	I0717 19:34:49.589401  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.589412  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:49.589420  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:49.589493  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:49.625497  459741 cri.go:89] found id: ""
	I0717 19:34:49.625532  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.625543  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:49.625557  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:49.625574  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:49.664499  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:49.664536  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:49.718160  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:49.718202  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:49.732774  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:49.732807  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:49.806951  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:49.806981  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:49.806999  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:48.192967  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:50.193695  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:51.675656  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:54.175342  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:56.176351  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:52.895953  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:55.394057  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:52.379790  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:52.393469  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:52.393554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:52.434277  459741 cri.go:89] found id: ""
	I0717 19:34:52.434312  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.434322  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:52.434330  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:52.434388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:52.470378  459741 cri.go:89] found id: ""
	I0717 19:34:52.470413  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.470421  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:52.470428  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:52.470501  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:52.506331  459741 cri.go:89] found id: ""
	I0717 19:34:52.506361  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.506369  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:52.506376  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:52.506431  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:52.547497  459741 cri.go:89] found id: ""
	I0717 19:34:52.547532  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.547540  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:52.547545  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:52.547615  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:52.584389  459741 cri.go:89] found id: ""
	I0717 19:34:52.584423  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.584434  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:52.584442  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:52.584527  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:52.621381  459741 cri.go:89] found id: ""
	I0717 19:34:52.621408  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.621416  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:52.621422  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:52.621472  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:52.661706  459741 cri.go:89] found id: ""
	I0717 19:34:52.661744  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.661756  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:52.661764  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:52.661832  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:52.702736  459741 cri.go:89] found id: ""
	I0717 19:34:52.702763  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.702773  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:52.702784  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:52.702799  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:52.741742  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:52.741779  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:52.794377  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:52.794429  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:52.809685  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:52.809717  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:52.884263  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:52.884289  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:52.884305  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:55.472342  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:55.486612  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:55.486677  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:55.519486  459741 cri.go:89] found id: ""
	I0717 19:34:55.519514  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.519522  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:55.519528  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:55.519638  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:55.555162  459741 cri.go:89] found id: ""
	I0717 19:34:55.555190  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.555198  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:55.555204  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:55.555259  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:55.591239  459741 cri.go:89] found id: ""
	I0717 19:34:55.591276  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.591288  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:55.591297  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:55.591359  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:55.628203  459741 cri.go:89] found id: ""
	I0717 19:34:55.628239  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.628251  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:55.628258  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:55.628347  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:55.664663  459741 cri.go:89] found id: ""
	I0717 19:34:55.664702  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.664715  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:55.664725  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:55.664822  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:55.702741  459741 cri.go:89] found id: ""
	I0717 19:34:55.702773  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.702780  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:55.702788  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:55.702862  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:55.745601  459741 cri.go:89] found id: ""
	I0717 19:34:55.745642  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.745653  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:55.745661  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:55.745742  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:55.786699  459741 cri.go:89] found id: ""
	I0717 19:34:55.786727  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.786736  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:55.786746  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:55.786764  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:55.831685  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:55.831722  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:55.885346  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:55.885389  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:55.902374  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:55.902407  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:55.974221  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:55.974245  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:55.974259  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:52.693991  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:55.194420  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:58.676747  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:01.176131  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:57.894988  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:00.394486  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:58.557685  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:58.571821  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:58.571887  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:58.606713  459741 cri.go:89] found id: ""
	I0717 19:34:58.606742  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.606751  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:58.606757  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:58.606831  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:58.640693  459741 cri.go:89] found id: ""
	I0717 19:34:58.640728  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.640738  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:58.640746  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:58.640816  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:58.675351  459741 cri.go:89] found id: ""
	I0717 19:34:58.675385  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.675396  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:58.675403  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:58.675470  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:58.711792  459741 cri.go:89] found id: ""
	I0717 19:34:58.711825  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.711834  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:58.711841  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:58.711898  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:58.751391  459741 cri.go:89] found id: ""
	I0717 19:34:58.751418  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.751427  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:58.751432  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:58.751492  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:58.789067  459741 cri.go:89] found id: ""
	I0717 19:34:58.789099  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.789109  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:58.789116  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:58.789193  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:58.827415  459741 cri.go:89] found id: ""
	I0717 19:34:58.827453  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.827464  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:58.827470  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:58.827538  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:58.865505  459741 cri.go:89] found id: ""
	I0717 19:34:58.865543  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.865553  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:58.865566  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:58.865587  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:58.921388  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:58.921427  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:58.935694  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:58.935724  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:59.012534  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:59.012561  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:59.012598  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:59.095950  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:59.096045  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:57.694041  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:00.194529  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:02.194641  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:03.176199  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:05.176261  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:02.894558  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:04.899436  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:01.640824  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:01.654969  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:01.655062  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:01.700480  459741 cri.go:89] found id: ""
	I0717 19:35:01.700528  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.700540  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:01.700548  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:01.700621  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:01.739274  459741 cri.go:89] found id: ""
	I0717 19:35:01.739309  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.739319  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:01.739327  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:01.739403  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:01.778555  459741 cri.go:89] found id: ""
	I0717 19:35:01.778591  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.778601  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:01.778609  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:01.778676  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:01.819147  459741 cri.go:89] found id: ""
	I0717 19:35:01.819189  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.819204  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:01.819213  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:01.819290  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:01.857132  459741 cri.go:89] found id: ""
	I0717 19:35:01.857178  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.857190  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:01.857199  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:01.857274  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:01.895551  459741 cri.go:89] found id: ""
	I0717 19:35:01.895583  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.895593  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:01.895602  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:01.895679  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:01.938146  459741 cri.go:89] found id: ""
	I0717 19:35:01.938185  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.938198  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:01.938206  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:01.938284  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:01.974876  459741 cri.go:89] found id: ""
	I0717 19:35:01.974909  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.974919  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:01.974933  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:01.974955  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:02.050651  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:02.050679  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:02.050711  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:02.130149  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:02.130191  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:02.170930  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:02.170961  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:02.226842  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:02.226889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:04.742978  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:04.757649  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:04.757714  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:04.795487  459741 cri.go:89] found id: ""
	I0717 19:35:04.795517  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.795525  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:04.795531  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:04.795583  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:04.832554  459741 cri.go:89] found id: ""
	I0717 19:35:04.832596  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.832607  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:04.832620  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:04.832678  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:04.867859  459741 cri.go:89] found id: ""
	I0717 19:35:04.867895  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.867904  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:04.867911  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:04.867971  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:04.905936  459741 cri.go:89] found id: ""
	I0717 19:35:04.905969  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.905978  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:04.905985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:04.906064  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:04.943177  459741 cri.go:89] found id: ""
	I0717 19:35:04.943204  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.943213  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:04.943219  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:04.943273  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:04.980038  459741 cri.go:89] found id: ""
	I0717 19:35:04.980073  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.980087  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:04.980093  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:04.980154  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:05.020848  459741 cri.go:89] found id: ""
	I0717 19:35:05.020885  459741 logs.go:276] 0 containers: []
	W0717 19:35:05.020896  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:05.020907  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:05.020985  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:05.060505  459741 cri.go:89] found id: ""
	I0717 19:35:05.060543  459741 logs.go:276] 0 containers: []
	W0717 19:35:05.060556  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:05.060592  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:05.060617  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:05.113354  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:05.113400  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:05.128045  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:05.128086  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:05.213923  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:05.214020  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:05.214045  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:05.296526  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:05.296577  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:04.194995  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:06.694576  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.678930  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:10.175252  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.394677  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:09.394932  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:11.395166  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.835865  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:07.851503  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:07.851581  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:07.899945  459741 cri.go:89] found id: ""
	I0717 19:35:07.899976  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.899984  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:07.899992  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:07.900066  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:07.938294  459741 cri.go:89] found id: ""
	I0717 19:35:07.938326  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.938335  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:07.938342  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:07.938402  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:07.975274  459741 cri.go:89] found id: ""
	I0717 19:35:07.975309  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.975319  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:07.975327  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:07.975401  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:08.010818  459741 cri.go:89] found id: ""
	I0717 19:35:08.010864  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.010873  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:08.010880  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:08.010945  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:08.054494  459741 cri.go:89] found id: ""
	I0717 19:35:08.054532  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.054544  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:08.054552  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:08.054651  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:08.096357  459741 cri.go:89] found id: ""
	I0717 19:35:08.096384  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.096393  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:08.096399  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:08.096461  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:08.134694  459741 cri.go:89] found id: ""
	I0717 19:35:08.134739  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.134749  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:08.134755  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:08.134833  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:08.171722  459741 cri.go:89] found id: ""
	I0717 19:35:08.171757  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.171768  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:08.171780  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:08.171797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:08.252441  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:08.252502  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:08.298782  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:08.298815  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:08.352934  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:08.352974  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:08.367121  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:08.367158  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:08.445860  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:10.946537  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:10.959955  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:10.960025  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:10.994611  459741 cri.go:89] found id: ""
	I0717 19:35:10.994646  459741 logs.go:276] 0 containers: []
	W0717 19:35:10.994658  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:10.994667  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:10.994733  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:11.031997  459741 cri.go:89] found id: ""
	I0717 19:35:11.032027  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.032035  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:11.032041  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:11.032115  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:11.073818  459741 cri.go:89] found id: ""
	I0717 19:35:11.073854  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.073865  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:11.073874  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:11.073942  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:11.109966  459741 cri.go:89] found id: ""
	I0717 19:35:11.110000  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.110012  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:11.110025  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:11.110100  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:11.146928  459741 cri.go:89] found id: ""
	I0717 19:35:11.146958  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.146980  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:11.146988  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:11.147056  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:11.189327  459741 cri.go:89] found id: ""
	I0717 19:35:11.189364  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.189374  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:11.189383  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:11.189457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:11.228587  459741 cri.go:89] found id: ""
	I0717 19:35:11.228628  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.228641  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:11.228650  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:11.228719  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:11.267624  459741 cri.go:89] found id: ""
	I0717 19:35:11.267671  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.267685  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:11.267699  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:11.267716  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:11.322589  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:11.322631  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:11.338101  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:11.338147  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:11.411360  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:11.411387  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:11.411405  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:11.495657  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:11.495701  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:09.194430  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:11.693290  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:12.175345  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:14.175825  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:16.177247  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:13.894711  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:15.894771  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:14.037797  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:14.050939  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:14.051012  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:14.093711  459741 cri.go:89] found id: ""
	I0717 19:35:14.093744  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.093756  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:14.093764  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:14.093837  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:14.132139  459741 cri.go:89] found id: ""
	I0717 19:35:14.132168  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.132180  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:14.132188  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:14.132256  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:14.170950  459741 cri.go:89] found id: ""
	I0717 19:35:14.170978  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.170988  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:14.170995  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:14.171073  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:14.211104  459741 cri.go:89] found id: ""
	I0717 19:35:14.211138  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.211148  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:14.211155  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:14.211229  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:14.245921  459741 cri.go:89] found id: ""
	I0717 19:35:14.245961  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.245975  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:14.245985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:14.246053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:14.309477  459741 cri.go:89] found id: ""
	I0717 19:35:14.309509  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.309520  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:14.309529  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:14.309617  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:14.346835  459741 cri.go:89] found id: ""
	I0717 19:35:14.346863  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.346872  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:14.346878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:14.346935  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:14.381258  459741 cri.go:89] found id: ""
	I0717 19:35:14.381289  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.381298  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:14.381307  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:14.381324  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:14.436214  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:14.436262  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:14.452446  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:14.452478  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:14.520238  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:14.520265  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:14.520282  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:14.600444  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:14.600502  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:13.694391  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:16.194147  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:18.676158  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.676984  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:18.394226  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.395263  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:17.144586  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:17.157992  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:17.158084  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:17.195200  459741 cri.go:89] found id: ""
	I0717 19:35:17.195228  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.195238  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:17.195245  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:17.195308  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:17.231846  459741 cri.go:89] found id: ""
	I0717 19:35:17.231892  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.231904  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:17.231913  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:17.231974  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:17.268234  459741 cri.go:89] found id: ""
	I0717 19:35:17.268261  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.268269  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:17.268275  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:17.268328  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:17.308536  459741 cri.go:89] found id: ""
	I0717 19:35:17.308565  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.308574  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:17.308581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:17.308655  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:17.344285  459741 cri.go:89] found id: ""
	I0717 19:35:17.344316  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.344325  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:17.344331  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:17.344393  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:17.384384  459741 cri.go:89] found id: ""
	I0717 19:35:17.384416  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.384425  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:17.384431  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:17.384518  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:17.422255  459741 cri.go:89] found id: ""
	I0717 19:35:17.422282  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.422291  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:17.422297  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:17.422349  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:17.459561  459741 cri.go:89] found id: ""
	I0717 19:35:17.459590  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.459599  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:17.459611  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:17.459628  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:17.473472  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:17.473510  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:17.544929  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:17.544962  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:17.544979  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:17.627230  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:17.627275  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:17.680586  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:17.680622  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:20.234582  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:20.248215  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:20.248282  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:20.286124  459741 cri.go:89] found id: ""
	I0717 19:35:20.286159  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.286171  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:20.286180  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:20.286251  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:20.323885  459741 cri.go:89] found id: ""
	I0717 19:35:20.323925  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.323938  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:20.323945  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:20.324013  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:20.363968  459741 cri.go:89] found id: ""
	I0717 19:35:20.364011  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.364025  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:20.364034  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:20.364108  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:20.404100  459741 cri.go:89] found id: ""
	I0717 19:35:20.404127  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.404136  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:20.404142  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:20.404212  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:20.442339  459741 cri.go:89] found id: ""
	I0717 19:35:20.442372  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.442383  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:20.442391  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:20.442462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:20.480461  459741 cri.go:89] found id: ""
	I0717 19:35:20.480505  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.480517  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:20.480526  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:20.480618  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:20.516072  459741 cri.go:89] found id: ""
	I0717 19:35:20.516104  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.516114  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:20.516119  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:20.516171  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:20.552294  459741 cri.go:89] found id: ""
	I0717 19:35:20.552333  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.552345  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:20.552359  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:20.552377  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:20.607025  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:20.607067  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:20.624323  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:20.624363  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:20.716528  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:20.716550  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:20.716567  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:20.797015  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:20.797059  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:18.693667  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.694367  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:23.175240  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:25.175374  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:22.893704  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:24.893940  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:23.345063  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:23.358664  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:23.358781  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:23.395399  459741 cri.go:89] found id: ""
	I0717 19:35:23.395429  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.395436  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:23.395441  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:23.395498  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:23.434827  459741 cri.go:89] found id: ""
	I0717 19:35:23.434866  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.434880  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:23.434889  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:23.434960  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:23.470884  459741 cri.go:89] found id: ""
	I0717 19:35:23.470915  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.470931  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:23.470937  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:23.470989  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:23.508532  459741 cri.go:89] found id: ""
	I0717 19:35:23.508566  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.508575  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:23.508581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:23.508636  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:23.543803  459741 cri.go:89] found id: ""
	I0717 19:35:23.543840  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.543856  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:23.543865  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:23.543938  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:23.578897  459741 cri.go:89] found id: ""
	I0717 19:35:23.578942  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.578953  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:23.578962  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:23.579028  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:23.617967  459741 cri.go:89] found id: ""
	I0717 19:35:23.618003  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.618013  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:23.618021  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:23.618092  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:23.660780  459741 cri.go:89] found id: ""
	I0717 19:35:23.660818  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.660830  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:23.660845  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:23.660862  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:23.745248  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:23.745305  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:23.784355  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:23.784392  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:23.838152  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:23.838199  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:23.853017  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:23.853046  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:23.932674  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:26.433476  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:26.457953  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:26.458030  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:23.192304  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:25.193087  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:27.176102  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:29.677887  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:26.895714  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:29.398017  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:26.515559  459741 cri.go:89] found id: ""
	I0717 19:35:26.515589  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.515598  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:26.515605  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:26.515668  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:26.555092  459741 cri.go:89] found id: ""
	I0717 19:35:26.555123  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.555134  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:26.555142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:26.555208  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:26.591291  459741 cri.go:89] found id: ""
	I0717 19:35:26.591335  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.591348  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:26.591357  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:26.591429  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:26.628941  459741 cri.go:89] found id: ""
	I0717 19:35:26.628970  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.628978  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:26.628985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:26.629050  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:26.668355  459741 cri.go:89] found id: ""
	I0717 19:35:26.668386  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.668394  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:26.668399  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:26.668457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:26.711810  459741 cri.go:89] found id: ""
	I0717 19:35:26.711846  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.711857  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:26.711865  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:26.711937  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:26.751674  459741 cri.go:89] found id: ""
	I0717 19:35:26.751708  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.751719  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:26.751726  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:26.751781  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:26.792690  459741 cri.go:89] found id: ""
	I0717 19:35:26.792784  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.792803  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:26.792816  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:26.792847  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:26.846466  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:26.846503  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:26.861467  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:26.861500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:26.934219  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:26.934244  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:26.934260  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:27.017150  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:27.017197  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:29.569360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:29.584040  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:29.584112  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:29.619704  459741 cri.go:89] found id: ""
	I0717 19:35:29.619738  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.619750  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:29.619756  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:29.619824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:29.655983  459741 cri.go:89] found id: ""
	I0717 19:35:29.656018  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.656030  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:29.656037  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:29.656103  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:29.694056  459741 cri.go:89] found id: ""
	I0717 19:35:29.694088  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.694098  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:29.694107  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:29.694165  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:29.731955  459741 cri.go:89] found id: ""
	I0717 19:35:29.732047  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.732066  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:29.732075  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:29.732142  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:29.765921  459741 cri.go:89] found id: ""
	I0717 19:35:29.765952  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.765961  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:29.765967  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:29.766022  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:29.798699  459741 cri.go:89] found id: ""
	I0717 19:35:29.798728  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.798736  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:29.798742  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:29.798804  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:29.832551  459741 cri.go:89] found id: ""
	I0717 19:35:29.832580  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.832587  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:29.832593  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:29.832652  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:29.867985  459741 cri.go:89] found id: ""
	I0717 19:35:29.868022  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.868033  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:29.868046  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:29.868071  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:29.941724  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:29.941746  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:29.941760  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:30.025462  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:30.025506  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:30.066732  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:30.066768  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:30.117389  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:30.117434  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:27.694070  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:30.193593  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.194062  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.175354  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:34.675049  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:31.894626  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:33.897661  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.394620  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.632779  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:32.648751  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:32.648828  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:32.686145  459741 cri.go:89] found id: ""
	I0717 19:35:32.686174  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.686182  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:32.686190  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:32.686242  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:32.721924  459741 cri.go:89] found id: ""
	I0717 19:35:32.721956  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.721967  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:32.721974  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:32.722042  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:32.760815  459741 cri.go:89] found id: ""
	I0717 19:35:32.760851  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.760862  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:32.760869  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:32.760939  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:32.797740  459741 cri.go:89] found id: ""
	I0717 19:35:32.797779  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.797792  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:32.797801  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:32.797878  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:32.833914  459741 cri.go:89] found id: ""
	I0717 19:35:32.833947  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.833955  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:32.833962  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:32.834020  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:32.870265  459741 cri.go:89] found id: ""
	I0717 19:35:32.870297  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.870306  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:32.870319  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:32.870388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:32.911340  459741 cri.go:89] found id: ""
	I0717 19:35:32.911380  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.911391  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:32.911402  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:32.911470  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:32.947932  459741 cri.go:89] found id: ""
	I0717 19:35:32.947967  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.947978  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:32.947990  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:32.948008  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:33.016473  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:33.016513  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:33.016527  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:33.096741  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:33.096783  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:33.137686  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:33.137723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:33.194110  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:33.194157  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:35.710074  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:35.723799  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:35.723880  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:35.759473  459741 cri.go:89] found id: ""
	I0717 19:35:35.759515  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.759526  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:35.759535  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:35.759606  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:35.796764  459741 cri.go:89] found id: ""
	I0717 19:35:35.796799  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.796809  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:35.796817  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:35.796892  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:35.831345  459741 cri.go:89] found id: ""
	I0717 19:35:35.831375  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.831386  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:35.831394  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:35.831463  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:35.869885  459741 cri.go:89] found id: ""
	I0717 19:35:35.869920  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.869931  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:35.869939  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:35.870009  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:35.908812  459741 cri.go:89] found id: ""
	I0717 19:35:35.908840  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.908849  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:35.908855  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:35.908909  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:35.946227  459741 cri.go:89] found id: ""
	I0717 19:35:35.946285  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.946297  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:35.946305  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:35.946387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:35.983534  459741 cri.go:89] found id: ""
	I0717 19:35:35.983577  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.983592  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:35.983601  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:35.983670  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:36.019516  459741 cri.go:89] found id: ""
	I0717 19:35:36.019552  459741 logs.go:276] 0 containers: []
	W0717 19:35:36.019564  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:36.019578  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:36.019597  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:36.070887  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:36.070931  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:36.087054  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:36.087092  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:36.163759  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:36.163795  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:36.163809  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:36.249968  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:36.250012  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:34.693272  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.693505  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.675472  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.677852  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:40.679662  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.895397  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:41.394394  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.799616  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:38.813094  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:38.813161  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:38.848696  459741 cri.go:89] found id: ""
	I0717 19:35:38.848731  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.848745  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:38.848754  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:38.848836  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:38.885898  459741 cri.go:89] found id: ""
	I0717 19:35:38.885932  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.885943  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:38.885950  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:38.886016  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:38.925499  459741 cri.go:89] found id: ""
	I0717 19:35:38.925531  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.925543  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:38.925550  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:38.925615  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:38.961176  459741 cri.go:89] found id: ""
	I0717 19:35:38.961209  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.961218  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:38.961225  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:38.961279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:38.998940  459741 cri.go:89] found id: ""
	I0717 19:35:38.998971  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.998980  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:38.998986  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:38.999040  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:39.034934  459741 cri.go:89] found id: ""
	I0717 19:35:39.034966  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.034973  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:39.034980  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:39.035034  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:39.070278  459741 cri.go:89] found id: ""
	I0717 19:35:39.070309  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.070319  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:39.070327  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:39.070413  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:39.106302  459741 cri.go:89] found id: ""
	I0717 19:35:39.106337  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.106348  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:39.106361  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:39.106379  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:39.145656  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:39.145685  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:39.198998  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:39.199042  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:39.215383  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:39.215416  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:39.284244  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:39.284270  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:39.284286  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:38.693865  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:40.694855  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:43.176915  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.676854  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:43.394736  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.395188  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:41.864335  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:41.878557  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:41.878645  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:41.919806  459741 cri.go:89] found id: ""
	I0717 19:35:41.919843  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.919856  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:41.919865  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:41.919938  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:41.956113  459741 cri.go:89] found id: ""
	I0717 19:35:41.956144  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.956154  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:41.956161  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:41.956230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:41.996211  459741 cri.go:89] found id: ""
	I0717 19:35:41.996256  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.996266  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:41.996274  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:41.996341  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:42.030800  459741 cri.go:89] found id: ""
	I0717 19:35:42.030829  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.030840  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:42.030847  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:42.030922  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:42.065307  459741 cri.go:89] found id: ""
	I0717 19:35:42.065347  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.065358  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:42.065368  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:42.065440  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:42.103574  459741 cri.go:89] found id: ""
	I0717 19:35:42.103609  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.103621  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:42.103628  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:42.103693  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:42.141146  459741 cri.go:89] found id: ""
	I0717 19:35:42.141181  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.141320  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:42.141337  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:42.141418  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:42.179958  459741 cri.go:89] found id: ""
	I0717 19:35:42.179986  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.179994  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:42.180004  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:42.180017  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:42.194911  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:42.194947  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:42.267709  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:42.267750  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:42.267772  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:42.347258  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:42.347302  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:42.393595  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:42.393631  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:44.946043  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:44.958994  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:44.959086  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:44.997687  459741 cri.go:89] found id: ""
	I0717 19:35:44.997724  459741 logs.go:276] 0 containers: []
	W0717 19:35:44.997735  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:44.997743  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:44.997814  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:45.038023  459741 cri.go:89] found id: ""
	I0717 19:35:45.038060  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.038070  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:45.038079  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:45.038141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:45.073529  459741 cri.go:89] found id: ""
	I0717 19:35:45.073562  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.073573  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:45.073581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:45.073644  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:45.109831  459741 cri.go:89] found id: ""
	I0717 19:35:45.109863  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.109871  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:45.109878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:45.109933  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:45.147828  459741 cri.go:89] found id: ""
	I0717 19:35:45.147867  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.147891  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:45.147899  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:45.147986  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:45.184729  459741 cri.go:89] found id: ""
	I0717 19:35:45.184765  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.184777  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:45.184784  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:45.184846  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:45.223895  459741 cri.go:89] found id: ""
	I0717 19:35:45.223940  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.223950  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:45.223956  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:45.224016  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:45.263391  459741 cri.go:89] found id: ""
	I0717 19:35:45.263421  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.263430  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:45.263440  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:45.263457  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:45.316323  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:45.316369  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:45.331447  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:45.331491  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:45.413226  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:45.413259  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:45.413277  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:45.498680  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:45.498738  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:43.193210  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.693264  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:48.175929  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:50.176109  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:47.893486  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:49.894666  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:48.043162  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:48.057081  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:48.057146  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:48.096607  459741 cri.go:89] found id: ""
	I0717 19:35:48.096636  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.096644  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:48.096650  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:48.096710  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:48.132865  459741 cri.go:89] found id: ""
	I0717 19:35:48.132895  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.132906  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:48.132913  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:48.132979  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:48.168060  459741 cri.go:89] found id: ""
	I0717 19:35:48.168090  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.168102  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:48.168109  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:48.168177  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:48.203993  459741 cri.go:89] found id: ""
	I0717 19:35:48.204023  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.204033  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:48.204041  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:48.204102  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:48.240321  459741 cri.go:89] found id: ""
	I0717 19:35:48.240353  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.240364  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:48.240371  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:48.240440  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:48.281103  459741 cri.go:89] found id: ""
	I0717 19:35:48.281147  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.281158  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:48.281167  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:48.281233  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:48.316002  459741 cri.go:89] found id: ""
	I0717 19:35:48.316034  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.316043  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:48.316049  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:48.316102  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:48.355370  459741 cri.go:89] found id: ""
	I0717 19:35:48.355399  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.355409  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:48.355421  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:48.355456  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:48.372448  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:48.372496  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:48.443867  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:48.443901  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:48.443919  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:48.519762  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:48.519807  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:48.562263  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:48.562297  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:51.112016  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:51.125350  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:51.125421  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:51.162053  459741 cri.go:89] found id: ""
	I0717 19:35:51.162090  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.162101  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:51.162111  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:51.162182  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:51.201853  459741 cri.go:89] found id: ""
	I0717 19:35:51.201924  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.201937  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:51.201944  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:51.202021  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:51.241675  459741 cri.go:89] found id: ""
	I0717 19:35:51.241709  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.241720  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:51.241729  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:51.241798  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:51.279332  459741 cri.go:89] found id: ""
	I0717 19:35:51.279369  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.279380  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:51.279388  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:51.279443  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:51.316375  459741 cri.go:89] found id: ""
	I0717 19:35:51.316413  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.316424  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:51.316432  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:51.316531  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:51.353300  459741 cri.go:89] found id: ""
	I0717 19:35:51.353337  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.353347  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:51.353355  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:51.353424  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:51.390413  459741 cri.go:89] found id: ""
	I0717 19:35:51.390441  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.390449  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:51.390457  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:51.390523  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:51.428040  459741 cri.go:89] found id: ""
	I0717 19:35:51.428077  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.428089  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:51.428103  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:51.428145  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:51.481743  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:51.481792  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:51.498226  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:51.498261  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:35:48.194645  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:50.194741  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:52.676762  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:55.177549  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:51.895688  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:54.394821  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	W0717 19:35:51.579871  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:51.579895  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:51.579909  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:51.659448  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:51.659490  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:54.201712  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:54.215688  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:54.215766  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:54.253448  459741 cri.go:89] found id: ""
	I0717 19:35:54.253479  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.253487  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:54.253493  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:54.253547  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:54.288135  459741 cri.go:89] found id: ""
	I0717 19:35:54.288176  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.288187  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:54.288194  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:54.288292  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:54.324798  459741 cri.go:89] found id: ""
	I0717 19:35:54.324845  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.324855  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:54.324864  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:54.324936  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:54.363909  459741 cri.go:89] found id: ""
	I0717 19:35:54.363943  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.363955  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:54.363964  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:54.364039  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:54.401221  459741 cri.go:89] found id: ""
	I0717 19:35:54.401248  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.401259  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:54.401267  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:54.401335  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:54.439258  459741 cri.go:89] found id: ""
	I0717 19:35:54.439285  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.439293  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:54.439299  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:54.439352  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:54.473321  459741 cri.go:89] found id: ""
	I0717 19:35:54.473358  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.473373  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:54.473379  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:54.473432  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:54.519107  459741 cri.go:89] found id: ""
	I0717 19:35:54.519141  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.519152  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:54.519167  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:54.519184  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:54.562666  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:54.562710  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:54.614711  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:54.614756  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:54.630953  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:54.630986  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:54.706639  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:54.706666  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:54.706684  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:52.694467  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:55.193366  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:57.179574  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:59.675883  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:56.895166  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:59.396238  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:57.289180  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:57.302364  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:57.302447  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:57.344401  459741 cri.go:89] found id: ""
	I0717 19:35:57.344437  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.344450  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:57.344459  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:57.344551  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:57.384095  459741 cri.go:89] found id: ""
	I0717 19:35:57.384126  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.384135  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:57.384142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:57.384209  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:57.422789  459741 cri.go:89] found id: ""
	I0717 19:35:57.422825  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.422836  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:57.422844  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:57.422914  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:57.460943  459741 cri.go:89] found id: ""
	I0717 19:35:57.460970  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.460979  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:57.460984  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:57.461035  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:57.495168  459741 cri.go:89] found id: ""
	I0717 19:35:57.495197  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.495204  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:57.495211  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:57.495267  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:57.529611  459741 cri.go:89] found id: ""
	I0717 19:35:57.529641  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.529649  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:57.529656  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:57.529719  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:57.565502  459741 cri.go:89] found id: ""
	I0717 19:35:57.565535  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.565544  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:57.565549  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:57.565610  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:57.601058  459741 cri.go:89] found id: ""
	I0717 19:35:57.601093  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.601107  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:57.601121  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:57.601139  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:57.651408  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:57.651450  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:57.665696  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:57.665734  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:57.739259  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:57.739301  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:57.739335  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:57.818085  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:57.818128  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:00.358441  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:00.371840  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:00.371904  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:00.411607  459741 cri.go:89] found id: ""
	I0717 19:36:00.411639  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.411647  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:00.411653  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:00.411717  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:00.448879  459741 cri.go:89] found id: ""
	I0717 19:36:00.448917  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.448929  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:00.448938  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:00.449006  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:00.489637  459741 cri.go:89] found id: ""
	I0717 19:36:00.489683  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.489695  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:00.489705  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:00.489773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:00.528172  459741 cri.go:89] found id: ""
	I0717 19:36:00.528206  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.528215  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:00.528221  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:00.528284  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:00.564857  459741 cri.go:89] found id: ""
	I0717 19:36:00.564891  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.564903  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:00.564911  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:00.564979  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:00.601226  459741 cri.go:89] found id: ""
	I0717 19:36:00.601257  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.601269  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:00.601277  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:00.601342  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:00.641481  459741 cri.go:89] found id: ""
	I0717 19:36:00.641515  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.641526  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:00.641533  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:00.641609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:00.678564  459741 cri.go:89] found id: ""
	I0717 19:36:00.678590  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.678598  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:00.678608  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:00.678622  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:00.763613  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:00.763657  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:00.804763  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:00.804797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:00.856648  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:00.856686  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:00.870767  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:00.870797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:00.949952  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:57.694827  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:00.193607  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:02.194404  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:01.676020  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:03.676246  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:05.676400  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:01.894566  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:04.394473  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:06.395396  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:03.450461  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:03.465429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:03.465500  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:03.504346  459741 cri.go:89] found id: ""
	I0717 19:36:03.504377  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.504387  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:03.504393  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:03.504457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:03.546643  459741 cri.go:89] found id: ""
	I0717 19:36:03.546671  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.546678  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:03.546685  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:03.546741  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:03.587389  459741 cri.go:89] found id: ""
	I0717 19:36:03.587423  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.587435  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:03.587443  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:03.587506  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:03.621968  459741 cri.go:89] found id: ""
	I0717 19:36:03.622002  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.622014  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:03.622023  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:03.622095  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:03.655934  459741 cri.go:89] found id: ""
	I0717 19:36:03.655967  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.655976  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:03.655982  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:03.656051  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:03.690464  459741 cri.go:89] found id: ""
	I0717 19:36:03.690493  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.690503  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:03.690511  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:03.690575  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:03.727030  459741 cri.go:89] found id: ""
	I0717 19:36:03.727068  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.727080  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:03.727088  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:03.727158  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:03.760858  459741 cri.go:89] found id: ""
	I0717 19:36:03.760898  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.760907  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:03.760917  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:03.760931  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:03.774333  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:03.774366  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:03.849228  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:03.849255  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:03.849273  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:03.930165  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:03.930203  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:03.971833  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:03.971875  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:04.693899  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:07.192840  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:07.678006  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:10.176147  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:08.395699  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:10.894333  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:06.525723  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:06.539410  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:06.539502  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:06.580112  459741 cri.go:89] found id: ""
	I0717 19:36:06.580152  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.580173  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:06.580181  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:06.580272  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:06.622098  459741 cri.go:89] found id: ""
	I0717 19:36:06.622128  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.622136  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:06.622142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:06.622209  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:06.669930  459741 cri.go:89] found id: ""
	I0717 19:36:06.669962  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.669973  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:06.669982  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:06.670048  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:06.717072  459741 cri.go:89] found id: ""
	I0717 19:36:06.717111  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.717124  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:06.717132  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:06.717207  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:06.756637  459741 cri.go:89] found id: ""
	I0717 19:36:06.756672  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.756680  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:06.756694  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:06.756756  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:06.804359  459741 cri.go:89] found id: ""
	I0717 19:36:06.804388  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.804397  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:06.804404  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:06.804468  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:06.856082  459741 cri.go:89] found id: ""
	I0717 19:36:06.856111  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.856120  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:06.856125  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:06.856180  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:06.898141  459741 cri.go:89] found id: ""
	I0717 19:36:06.898170  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.898180  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:06.898191  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:06.898209  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:06.975635  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:06.975660  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:06.975676  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:07.055695  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:07.055741  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:07.096041  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:07.096077  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:07.146523  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:07.146570  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:09.661906  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:09.676994  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:09.677078  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:09.716287  459741 cri.go:89] found id: ""
	I0717 19:36:09.716315  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.716328  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:09.716337  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:09.716405  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:09.759489  459741 cri.go:89] found id: ""
	I0717 19:36:09.759521  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.759532  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:09.759541  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:09.759601  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:09.799604  459741 cri.go:89] found id: ""
	I0717 19:36:09.799634  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.799643  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:09.799649  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:09.799709  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:09.839542  459741 cri.go:89] found id: ""
	I0717 19:36:09.839572  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.839581  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:09.839588  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:09.839666  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:09.879061  459741 cri.go:89] found id: ""
	I0717 19:36:09.879098  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.879110  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:09.879118  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:09.879184  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:09.920903  459741 cri.go:89] found id: ""
	I0717 19:36:09.920931  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.920939  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:09.920946  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:09.921002  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:09.956362  459741 cri.go:89] found id: ""
	I0717 19:36:09.956391  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.956411  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:09.956429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:09.956508  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:09.992817  459741 cri.go:89] found id: ""
	I0717 19:36:09.992849  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.992859  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:09.992872  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:09.992889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:10.060594  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:10.060620  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:10.060660  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:10.141840  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:10.141895  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:10.182850  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:10.182889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:10.238946  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:10.238993  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:09.194101  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:11.693468  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.675987  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:15.176665  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.894710  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:15.394738  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.753796  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:12.766740  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:12.766816  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:12.799307  459741 cri.go:89] found id: ""
	I0717 19:36:12.799341  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.799351  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:12.799362  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:12.799439  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:12.838345  459741 cri.go:89] found id: ""
	I0717 19:36:12.838395  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.838408  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:12.838416  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:12.838482  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:12.876780  459741 cri.go:89] found id: ""
	I0717 19:36:12.876807  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.876816  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:12.876822  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:12.876907  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:12.913222  459741 cri.go:89] found id: ""
	I0717 19:36:12.913253  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.913263  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:12.913271  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:12.913334  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:12.948210  459741 cri.go:89] found id: ""
	I0717 19:36:12.948245  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.948255  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:12.948263  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:12.948328  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:12.980746  459741 cri.go:89] found id: ""
	I0717 19:36:12.980782  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.980794  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:12.980806  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:12.980871  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:13.015655  459741 cri.go:89] found id: ""
	I0717 19:36:13.015694  459741 logs.go:276] 0 containers: []
	W0717 19:36:13.015707  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:13.015715  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:13.015773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:13.050570  459741 cri.go:89] found id: ""
	I0717 19:36:13.050609  459741 logs.go:276] 0 containers: []
	W0717 19:36:13.050617  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:13.050627  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:13.050642  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:13.101031  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:13.101072  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:13.115206  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:13.115239  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:13.190949  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:13.190979  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:13.190994  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:13.267467  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:13.267508  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:15.808237  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:15.822498  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:15.822570  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:15.860509  459741 cri.go:89] found id: ""
	I0717 19:36:15.860545  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.860556  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:15.860564  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:15.860630  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:15.895608  459741 cri.go:89] found id: ""
	I0717 19:36:15.895655  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.895666  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:15.895674  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:15.895738  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:15.936113  459741 cri.go:89] found id: ""
	I0717 19:36:15.936148  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.936159  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:15.936168  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:15.936254  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:15.973146  459741 cri.go:89] found id: ""
	I0717 19:36:15.973186  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.973198  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:15.973207  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:15.973273  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:16.006122  459741 cri.go:89] found id: ""
	I0717 19:36:16.006164  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.006175  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:16.006183  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:16.006255  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:16.044352  459741 cri.go:89] found id: ""
	I0717 19:36:16.044385  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.044397  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:16.044406  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:16.044476  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:16.081573  459741 cri.go:89] found id: ""
	I0717 19:36:16.081614  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.081625  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:16.081637  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:16.081707  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:16.120444  459741 cri.go:89] found id: ""
	I0717 19:36:16.120480  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.120506  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:16.120520  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:16.120536  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:16.171563  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:16.171601  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:16.185534  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:16.185564  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:16.258627  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:16.258657  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:16.258672  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:16.341345  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:16.341390  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:14.193370  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:16.693933  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:17.680240  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:19.681457  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:17.894353  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:19.894879  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:18.883092  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:18.897931  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:18.898015  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:18.932054  459741 cri.go:89] found id: ""
	I0717 19:36:18.932085  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.932096  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:18.932104  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:18.932162  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:18.966450  459741 cri.go:89] found id: ""
	I0717 19:36:18.966478  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.966490  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:18.966498  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:18.966561  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:18.999881  459741 cri.go:89] found id: ""
	I0717 19:36:18.999909  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.999920  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:18.999927  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:18.999984  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:19.036701  459741 cri.go:89] found id: ""
	I0717 19:36:19.036730  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.036746  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:19.036753  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:19.036824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:19.073488  459741 cri.go:89] found id: ""
	I0717 19:36:19.073515  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.073523  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:19.073528  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:19.073582  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:19.109128  459741 cri.go:89] found id: ""
	I0717 19:36:19.109161  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.109171  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:19.109179  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:19.109249  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:19.148452  459741 cri.go:89] found id: ""
	I0717 19:36:19.148494  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.148509  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:19.148518  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:19.148595  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:19.184056  459741 cri.go:89] found id: ""
	I0717 19:36:19.184086  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.184097  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:19.184112  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:19.184129  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:19.198518  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:19.198553  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:19.273176  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:19.273198  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:19.273213  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:19.347999  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:19.348042  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:19.390847  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:19.390890  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:19.194436  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:21.693020  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:22.176414  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:24.676290  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:22.395588  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:24.894771  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:21.946700  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:21.960590  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:21.960655  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:21.994632  459741 cri.go:89] found id: ""
	I0717 19:36:21.994662  459741 logs.go:276] 0 containers: []
	W0717 19:36:21.994670  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:21.994677  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:21.994738  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:22.029390  459741 cri.go:89] found id: ""
	I0717 19:36:22.029419  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.029428  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:22.029434  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:22.029484  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:22.065632  459741 cri.go:89] found id: ""
	I0717 19:36:22.065668  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.065679  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:22.065687  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:22.065792  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:22.100893  459741 cri.go:89] found id: ""
	I0717 19:36:22.100931  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.100942  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:22.100950  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:22.101007  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:22.137064  459741 cri.go:89] found id: ""
	I0717 19:36:22.137099  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.137110  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:22.137118  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:22.137187  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:22.176027  459741 cri.go:89] found id: ""
	I0717 19:36:22.176061  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.176071  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:22.176080  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:22.176147  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:22.211035  459741 cri.go:89] found id: ""
	I0717 19:36:22.211060  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.211068  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:22.211076  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:22.211129  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:22.246541  459741 cri.go:89] found id: ""
	I0717 19:36:22.246577  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.246589  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:22.246617  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:22.246635  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:22.288154  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:22.288198  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:22.342243  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:22.342295  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:22.356125  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:22.356157  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:22.427767  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:22.427793  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:22.427806  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:25.011986  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:25.026057  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:25.026134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:25.060744  459741 cri.go:89] found id: ""
	I0717 19:36:25.060778  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.060788  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:25.060794  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:25.060857  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:25.094760  459741 cri.go:89] found id: ""
	I0717 19:36:25.094799  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.094810  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:25.094818  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:25.094884  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:25.129937  459741 cri.go:89] found id: ""
	I0717 19:36:25.129980  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.129990  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:25.129996  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:25.130053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:25.162886  459741 cri.go:89] found id: ""
	I0717 19:36:25.162914  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.162922  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:25.162927  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:25.162994  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:25.199261  459741 cri.go:89] found id: ""
	I0717 19:36:25.199290  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.199312  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:25.199329  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:25.199388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:25.236454  459741 cri.go:89] found id: ""
	I0717 19:36:25.236494  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.236506  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:25.236514  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:25.236569  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:25.272257  459741 cri.go:89] found id: ""
	I0717 19:36:25.272293  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.272304  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:25.272312  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:25.272381  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:25.308442  459741 cri.go:89] found id: ""
	I0717 19:36:25.308478  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.308504  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:25.308517  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:25.308534  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:25.362269  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:25.362321  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:25.376994  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:25.377026  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:25.450219  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:25.450242  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:25.450256  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:25.537123  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:25.537161  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:23.693457  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.192763  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.677228  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:29.175390  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.176353  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.895481  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:29.393635  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.395374  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:28.077415  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:28.093047  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:28.093126  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:28.128129  459741 cri.go:89] found id: ""
	I0717 19:36:28.128158  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.128166  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:28.128180  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:28.128234  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:28.170796  459741 cri.go:89] found id: ""
	I0717 19:36:28.170834  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.170845  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:28.170853  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:28.170924  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:28.208250  459741 cri.go:89] found id: ""
	I0717 19:36:28.208278  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.208287  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:28.208304  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:28.208385  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:28.251511  459741 cri.go:89] found id: ""
	I0717 19:36:28.251547  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.251567  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:28.251575  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:28.251648  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:28.286597  459741 cri.go:89] found id: ""
	I0717 19:36:28.286633  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.286643  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:28.286651  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:28.286715  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:28.323089  459741 cri.go:89] found id: ""
	I0717 19:36:28.323119  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.323127  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:28.323133  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:28.323192  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:28.357941  459741 cri.go:89] found id: ""
	I0717 19:36:28.357972  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.357980  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:28.357987  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:28.358053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:28.393141  459741 cri.go:89] found id: ""
	I0717 19:36:28.393171  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.393182  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:28.393192  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:28.393208  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:28.446992  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:28.447031  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:28.460386  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:28.460416  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:28.524640  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:28.524671  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:28.524694  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:28.605322  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:28.605363  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:31.145909  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:31.159567  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:31.159686  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:31.196086  459741 cri.go:89] found id: ""
	I0717 19:36:31.196113  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.196125  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:31.196134  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:31.196186  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:31.238076  459741 cri.go:89] found id: ""
	I0717 19:36:31.238104  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.238111  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:31.238117  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:31.238172  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:31.274360  459741 cri.go:89] found id: ""
	I0717 19:36:31.274391  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.274400  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:31.274406  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:31.274462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:31.308845  459741 cri.go:89] found id: ""
	I0717 19:36:31.308871  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.308880  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:31.308886  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:31.308946  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:31.344978  459741 cri.go:89] found id: ""
	I0717 19:36:31.345010  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.345021  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:31.345028  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:31.345094  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:31.381741  459741 cri.go:89] found id: ""
	I0717 19:36:31.381767  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.381775  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:31.381783  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:31.381837  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:31.417522  459741 cri.go:89] found id: ""
	I0717 19:36:31.417554  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.417563  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:31.417571  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:31.417635  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:31.451121  459741 cri.go:89] found id: ""
	I0717 19:36:31.451152  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.451165  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:31.451177  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:31.451195  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:28.195048  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:30.693260  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:33.676171  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:35.676215  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:33.894329  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:36.394573  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.542015  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:31.542063  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:31.583418  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:31.583449  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:31.635807  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:31.635845  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:31.649144  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:31.649172  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:31.728539  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:34.229124  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:34.242482  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:34.242554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:34.276554  459741 cri.go:89] found id: ""
	I0717 19:36:34.276602  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.276610  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:34.276616  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:34.276671  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:34.314766  459741 cri.go:89] found id: ""
	I0717 19:36:34.314799  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.314807  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:34.314813  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:34.314874  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:34.352765  459741 cri.go:89] found id: ""
	I0717 19:36:34.352798  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.352809  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:34.352817  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:34.352886  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:34.386519  459741 cri.go:89] found id: ""
	I0717 19:36:34.386556  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.386564  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:34.386570  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:34.386669  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:34.423789  459741 cri.go:89] found id: ""
	I0717 19:36:34.423820  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.423829  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:34.423838  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:34.423911  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:34.458849  459741 cri.go:89] found id: ""
	I0717 19:36:34.458883  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.458895  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:34.458903  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:34.458969  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:34.494653  459741 cri.go:89] found id: ""
	I0717 19:36:34.494686  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.494697  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:34.494705  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:34.494770  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:34.529386  459741 cri.go:89] found id: ""
	I0717 19:36:34.529423  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.529431  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:34.529441  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:34.529455  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:34.582161  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:34.582204  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:34.596699  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:34.596732  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:34.673468  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:34.673501  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:34.673519  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:34.751134  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:34.751180  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:33.193313  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:35.193610  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:38.178018  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.676860  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:38.395038  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.396311  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:37.290429  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:37.304307  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:37.304391  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:37.338790  459741 cri.go:89] found id: ""
	I0717 19:36:37.338818  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.338827  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:37.338833  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:37.338903  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:37.376923  459741 cri.go:89] found id: ""
	I0717 19:36:37.376953  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.376961  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:37.376966  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:37.377017  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:37.415988  459741 cri.go:89] found id: ""
	I0717 19:36:37.416016  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.416024  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:37.416029  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:37.416083  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:37.449398  459741 cri.go:89] found id: ""
	I0717 19:36:37.449435  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.449447  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:37.449459  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:37.449532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:37.489489  459741 cri.go:89] found id: ""
	I0717 19:36:37.489525  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.489535  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:37.489544  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:37.489609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:37.528055  459741 cri.go:89] found id: ""
	I0717 19:36:37.528092  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.528103  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:37.528112  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:37.528174  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:37.564295  459741 cri.go:89] found id: ""
	I0717 19:36:37.564332  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.564344  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:37.564352  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:37.564421  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:37.597909  459741 cri.go:89] found id: ""
	I0717 19:36:37.597949  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.597960  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:37.597976  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:37.598002  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:37.652104  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:37.652147  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:37.668341  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:37.668374  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:37.746663  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:37.746693  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:37.746706  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:37.822210  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:37.822250  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:40.370417  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:40.385795  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:40.385873  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:40.431821  459741 cri.go:89] found id: ""
	I0717 19:36:40.431861  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.431873  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:40.431881  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:40.431952  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:40.468302  459741 cri.go:89] found id: ""
	I0717 19:36:40.468334  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.468346  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:40.468354  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:40.468409  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:40.503678  459741 cri.go:89] found id: ""
	I0717 19:36:40.503709  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.503727  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:40.503733  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:40.503785  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:40.540732  459741 cri.go:89] found id: ""
	I0717 19:36:40.540763  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.540772  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:40.540778  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:40.540843  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:40.589546  459741 cri.go:89] found id: ""
	I0717 19:36:40.589574  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.589583  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:40.589590  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:40.589642  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:40.625314  459741 cri.go:89] found id: ""
	I0717 19:36:40.625350  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.625359  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:40.625368  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:40.625435  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:40.663946  459741 cri.go:89] found id: ""
	I0717 19:36:40.663974  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.663982  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:40.663990  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:40.664048  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:40.701681  459741 cri.go:89] found id: ""
	I0717 19:36:40.701712  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.701722  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:40.701732  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:40.701747  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:40.762876  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:40.762913  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:40.777993  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:40.778039  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:40.854973  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:40.854996  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:40.855015  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:40.935075  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:40.935114  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:37.693613  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.192783  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:42.193024  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:43.176326  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:45.675745  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:42.895180  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:45.396439  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:43.476048  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:43.490580  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:43.490652  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:43.525613  459741 cri.go:89] found id: ""
	I0717 19:36:43.525649  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.525658  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:43.525665  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:43.525722  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:43.564102  459741 cri.go:89] found id: ""
	I0717 19:36:43.564147  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.564158  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:43.564166  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:43.564230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:43.603290  459741 cri.go:89] found id: ""
	I0717 19:36:43.603316  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.603323  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:43.603329  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:43.603387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:43.638001  459741 cri.go:89] found id: ""
	I0717 19:36:43.638031  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.638038  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:43.638056  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:43.638134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:43.672992  459741 cri.go:89] found id: ""
	I0717 19:36:43.673026  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.673037  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:43.673045  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:43.673115  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:43.713130  459741 cri.go:89] found id: ""
	I0717 19:36:43.713165  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.713176  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:43.713188  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:43.713255  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:43.747637  459741 cri.go:89] found id: ""
	I0717 19:36:43.747685  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.747694  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:43.747702  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:43.747771  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:43.784425  459741 cri.go:89] found id: ""
	I0717 19:36:43.784460  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.784471  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:43.784492  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:43.784510  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:43.798454  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:43.798483  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:43.875753  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:43.875776  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:43.875793  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:43.957009  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:43.957052  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:44.001089  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:44.001122  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:44.193299  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:46.193520  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:47.679212  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:50.176924  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:47.894374  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:49.898348  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:46.554298  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:46.568658  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:46.568730  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:46.604721  459741 cri.go:89] found id: ""
	I0717 19:36:46.604750  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.604759  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:46.604765  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:46.604815  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:46.644164  459741 cri.go:89] found id: ""
	I0717 19:36:46.644196  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.644209  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:46.644217  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:46.644288  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:46.683657  459741 cri.go:89] found id: ""
	I0717 19:36:46.683695  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.683702  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:46.683708  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:46.683773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:46.720967  459741 cri.go:89] found id: ""
	I0717 19:36:46.720995  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.721003  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:46.721008  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:46.721059  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:46.755825  459741 cri.go:89] found id: ""
	I0717 19:36:46.755854  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.755866  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:46.755876  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:46.755946  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:46.797091  459741 cri.go:89] found id: ""
	I0717 19:36:46.797130  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.797138  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:46.797145  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:46.797201  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:46.838053  459741 cri.go:89] found id: ""
	I0717 19:36:46.838090  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.838100  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:46.838108  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:46.838176  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:46.881516  459741 cri.go:89] found id: ""
	I0717 19:36:46.881549  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.881558  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:46.881567  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:46.881582  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:46.952407  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:46.952434  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:46.952457  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:47.043739  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:47.043787  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:47.083335  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:47.083367  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:47.138212  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:47.138256  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:49.656394  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:49.670755  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:49.670830  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:49.709177  459741 cri.go:89] found id: ""
	I0717 19:36:49.709208  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.709217  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:49.709222  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:49.709286  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:49.745905  459741 cri.go:89] found id: ""
	I0717 19:36:49.745940  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.745952  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:49.745960  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:49.746038  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:49.779073  459741 cri.go:89] found id: ""
	I0717 19:36:49.779106  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.779117  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:49.779124  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:49.779190  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:49.815459  459741 cri.go:89] found id: ""
	I0717 19:36:49.815504  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.815516  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:49.815525  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:49.815635  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:49.854714  459741 cri.go:89] found id: ""
	I0717 19:36:49.854751  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.854760  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:49.854766  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:49.854821  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:49.897717  459741 cri.go:89] found id: ""
	I0717 19:36:49.897742  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.897752  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:49.897760  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:49.897824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:49.933388  459741 cri.go:89] found id: ""
	I0717 19:36:49.933419  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.933429  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:49.933437  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:49.933527  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:49.971955  459741 cri.go:89] found id: ""
	I0717 19:36:49.971988  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.971999  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:49.972011  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:49.972029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:50.025761  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:50.025801  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:50.039771  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:50.039801  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:50.111349  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:50.111374  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:50.111388  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:50.193972  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:50.194004  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:48.693842  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:51.192837  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.177150  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:54.675862  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.394841  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:54.395035  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:56.395227  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.733468  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:52.749052  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:52.749119  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:52.785364  459741 cri.go:89] found id: ""
	I0717 19:36:52.785392  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.785400  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:52.785407  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:52.785462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:52.824177  459741 cri.go:89] found id: ""
	I0717 19:36:52.824211  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.824219  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:52.824225  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:52.824298  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:52.860781  459741 cri.go:89] found id: ""
	I0717 19:36:52.860812  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.860823  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:52.860831  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:52.860904  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:52.903963  459741 cri.go:89] found id: ""
	I0717 19:36:52.903995  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.904006  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:52.904014  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:52.904080  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:52.944920  459741 cri.go:89] found id: ""
	I0717 19:36:52.944950  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.944961  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:52.944968  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:52.945033  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:53.007409  459741 cri.go:89] found id: ""
	I0717 19:36:53.007438  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.007449  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:53.007456  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:53.007526  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:53.048160  459741 cri.go:89] found id: ""
	I0717 19:36:53.048193  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.048205  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:53.048213  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:53.048285  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:53.083493  459741 cri.go:89] found id: ""
	I0717 19:36:53.083522  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.083534  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:53.083546  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:53.083563  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:53.139380  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:53.139425  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:53.154005  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:53.154107  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:53.230123  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:53.230146  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:53.230160  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:53.307183  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:53.307228  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:55.849344  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:55.863554  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:55.863625  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:55.899317  459741 cri.go:89] found id: ""
	I0717 19:36:55.899347  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.899358  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:55.899365  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:55.899433  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:55.934725  459741 cri.go:89] found id: ""
	I0717 19:36:55.934760  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.934771  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:55.934779  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:55.934854  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:55.967721  459741 cri.go:89] found id: ""
	I0717 19:36:55.967751  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.967760  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:55.967768  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:55.967835  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:56.001163  459741 cri.go:89] found id: ""
	I0717 19:36:56.001193  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.001203  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:56.001211  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:56.001309  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:56.040863  459741 cri.go:89] found id: ""
	I0717 19:36:56.040898  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.040910  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:56.040918  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:56.040990  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:56.075045  459741 cri.go:89] found id: ""
	I0717 19:36:56.075075  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.075083  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:56.075090  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:56.075141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:56.115641  459741 cri.go:89] found id: ""
	I0717 19:36:56.115673  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.115683  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:56.115692  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:56.115757  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:56.154952  459741 cri.go:89] found id: ""
	I0717 19:36:56.154989  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.155000  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:56.155012  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:56.155029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:56.168624  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:56.168655  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:56.241129  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:56.241149  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:56.241161  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:56.326577  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:56.326627  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:56.370835  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:56.370896  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:53.194230  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:55.693021  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:56.677604  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:59.177845  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:58.395814  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:00.894894  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:58.923483  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:58.936869  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:58.936971  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:58.970975  459741 cri.go:89] found id: ""
	I0717 19:36:58.971015  459741 logs.go:276] 0 containers: []
	W0717 19:36:58.971026  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:58.971036  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:58.971103  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:59.004902  459741 cri.go:89] found id: ""
	I0717 19:36:59.004936  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.004945  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:59.004953  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:59.005021  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:59.049595  459741 cri.go:89] found id: ""
	I0717 19:36:59.049627  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.049635  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:59.049642  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:59.049694  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:59.084143  459741 cri.go:89] found id: ""
	I0717 19:36:59.084175  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.084185  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:59.084192  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:59.084244  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:59.121362  459741 cri.go:89] found id: ""
	I0717 19:36:59.121397  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.121408  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:59.121416  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:59.121486  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:59.158791  459741 cri.go:89] found id: ""
	I0717 19:36:59.158823  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.158832  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:59.158839  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:59.158907  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:59.196785  459741 cri.go:89] found id: ""
	I0717 19:36:59.196814  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.196825  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:59.196832  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:59.196928  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:59.233526  459741 cri.go:89] found id: ""
	I0717 19:36:59.233585  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.233602  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:59.233615  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:59.233633  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:59.287586  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:59.287629  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:59.303060  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:59.303109  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:59.380105  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:59.380141  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:59.380160  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:59.457673  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:59.457723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:57.693064  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:59.696137  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:02.194529  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:01.676676  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:04.174546  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.176591  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:02.895007  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:04.896128  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:01.999397  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:02.013638  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:02.013769  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:02.053831  459741 cri.go:89] found id: ""
	I0717 19:37:02.053860  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.053869  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:02.053875  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:02.053929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:02.095600  459741 cri.go:89] found id: ""
	I0717 19:37:02.095634  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.095644  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:02.095650  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:02.095703  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:02.134219  459741 cri.go:89] found id: ""
	I0717 19:37:02.134253  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.134267  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:02.134277  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:02.134351  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:02.172985  459741 cri.go:89] found id: ""
	I0717 19:37:02.173017  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.173029  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:02.173037  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:02.173109  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:02.210465  459741 cri.go:89] found id: ""
	I0717 19:37:02.210492  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.210500  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:02.210506  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:02.210562  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:02.246736  459741 cri.go:89] found id: ""
	I0717 19:37:02.246767  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.246775  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:02.246781  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:02.246834  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:02.285131  459741 cri.go:89] found id: ""
	I0717 19:37:02.285166  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.285177  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:02.285185  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:02.285254  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:02.323199  459741 cri.go:89] found id: ""
	I0717 19:37:02.323232  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.323241  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:02.323252  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:02.323266  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:02.337356  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:02.337392  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:02.411669  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:02.411706  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:02.411724  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:02.488543  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:02.488590  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:02.531147  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:02.531189  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:05.085888  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:05.099059  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:05.099134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:05.140745  459741 cri.go:89] found id: ""
	I0717 19:37:05.140771  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.140783  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:05.140791  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:05.140859  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:05.175634  459741 cri.go:89] found id: ""
	I0717 19:37:05.175669  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.175679  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:05.175687  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:05.175761  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:05.213114  459741 cri.go:89] found id: ""
	I0717 19:37:05.213148  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.213157  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:05.213171  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:05.213242  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:05.249756  459741 cri.go:89] found id: ""
	I0717 19:37:05.249791  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.249803  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:05.249811  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:05.249882  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:05.285601  459741 cri.go:89] found id: ""
	I0717 19:37:05.285634  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.285645  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:05.285654  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:05.285729  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:05.325523  459741 cri.go:89] found id: ""
	I0717 19:37:05.325557  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.325566  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:05.325573  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:05.325641  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:05.364250  459741 cri.go:89] found id: ""
	I0717 19:37:05.364284  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.364295  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:05.364303  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:05.364377  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:05.399924  459741 cri.go:89] found id: ""
	I0717 19:37:05.399951  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.399958  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:05.399967  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:05.399979  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:05.456770  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:05.456821  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:05.472041  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:05.472073  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:05.539653  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:05.539685  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:05.539703  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:05.628977  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:05.629023  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:04.693176  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.693594  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:08.677525  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.175472  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.897414  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:09.394322  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.395513  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:08.181585  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:08.195153  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:08.195225  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:08.234624  459741 cri.go:89] found id: ""
	I0717 19:37:08.234662  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.234674  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:08.234682  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:08.234739  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:08.273034  459741 cri.go:89] found id: ""
	I0717 19:37:08.273069  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.273081  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:08.273089  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:08.273157  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:08.310695  459741 cri.go:89] found id: ""
	I0717 19:37:08.310728  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.310740  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:08.310749  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:08.310815  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:08.346891  459741 cri.go:89] found id: ""
	I0717 19:37:08.346925  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.346936  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:08.346944  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:08.347015  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:08.384830  459741 cri.go:89] found id: ""
	I0717 19:37:08.384863  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.384872  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:08.384878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:08.384948  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:08.423939  459741 cri.go:89] found id: ""
	I0717 19:37:08.423973  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.423983  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:08.423991  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:08.424046  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:08.460822  459741 cri.go:89] found id: ""
	I0717 19:37:08.460854  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.460863  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:08.460874  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:08.460929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:08.497122  459741 cri.go:89] found id: ""
	I0717 19:37:08.497152  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.497164  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:08.497182  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:08.497197  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:08.549130  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:08.549179  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:08.566072  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:08.566109  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:08.637602  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:08.637629  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:08.637647  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:08.729025  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:08.729078  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:11.270696  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:11.285472  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:11.285554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:11.319587  459741 cri.go:89] found id: ""
	I0717 19:37:11.319629  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.319638  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:11.319646  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:11.319712  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:11.353044  459741 cri.go:89] found id: ""
	I0717 19:37:11.353077  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.353087  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:11.353093  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:11.353189  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:11.389515  459741 cri.go:89] found id: ""
	I0717 19:37:11.389545  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.389557  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:11.389565  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:11.389634  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:11.430599  459741 cri.go:89] found id: ""
	I0717 19:37:11.430632  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.430640  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:11.430646  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:11.430714  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:11.472171  459741 cri.go:89] found id: ""
	I0717 19:37:11.472207  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.472217  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:11.472223  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:11.472295  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:09.193245  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.695407  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:13.176224  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:15.179677  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:13.895579  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:16.394706  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.510599  459741 cri.go:89] found id: ""
	I0717 19:37:11.510672  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.510689  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:11.510706  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:11.510779  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:11.550914  459741 cri.go:89] found id: ""
	I0717 19:37:11.550946  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.550954  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:11.550960  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:11.551017  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:11.591129  459741 cri.go:89] found id: ""
	I0717 19:37:11.591205  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.591219  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:11.591233  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:11.591252  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:11.646229  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:11.646265  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:11.661204  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:11.661243  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:11.742396  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:11.742426  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:11.742442  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:11.824647  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:11.824687  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:14.364360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:14.381022  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:14.381101  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:14.419922  459741 cri.go:89] found id: ""
	I0717 19:37:14.419960  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.419971  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:14.419977  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:14.420032  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:14.459256  459741 cri.go:89] found id: ""
	I0717 19:37:14.459288  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.459296  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:14.459317  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:14.459387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:14.494487  459741 cri.go:89] found id: ""
	I0717 19:37:14.494517  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.494528  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:14.494535  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:14.494609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:14.528878  459741 cri.go:89] found id: ""
	I0717 19:37:14.528919  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.528928  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:14.528934  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:14.528999  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:14.564401  459741 cri.go:89] found id: ""
	I0717 19:37:14.564439  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.564451  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:14.564460  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:14.564548  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:14.604641  459741 cri.go:89] found id: ""
	I0717 19:37:14.604682  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.604694  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:14.604703  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:14.604770  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:14.638128  459741 cri.go:89] found id: ""
	I0717 19:37:14.638159  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.638168  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:14.638175  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:14.638245  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:14.679475  459741 cri.go:89] found id: ""
	I0717 19:37:14.679508  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.679518  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:14.679529  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:14.679545  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:14.733829  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:14.733871  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:14.748878  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:14.748910  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:14.821043  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:14.821073  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:14.821089  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:14.905137  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:14.905178  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:14.193577  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:16.193939  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:17.181158  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:19.675868  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:18.894678  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:20.895683  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:17.445221  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:17.459152  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:17.459221  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:17.498175  459741 cri.go:89] found id: ""
	I0717 19:37:17.498204  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.498216  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:17.498226  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:17.498287  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:17.534460  459741 cri.go:89] found id: ""
	I0717 19:37:17.534498  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.534506  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:17.534512  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:17.534571  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:17.571998  459741 cri.go:89] found id: ""
	I0717 19:37:17.572030  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.572040  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:17.572047  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:17.572110  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:17.611184  459741 cri.go:89] found id: ""
	I0717 19:37:17.611215  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.611224  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:17.611231  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:17.611282  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:17.656227  459741 cri.go:89] found id: ""
	I0717 19:37:17.656275  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.656287  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:17.656295  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:17.656361  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:17.695693  459741 cri.go:89] found id: ""
	I0717 19:37:17.695727  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.695746  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:17.695763  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:17.695835  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:17.734017  459741 cri.go:89] found id: ""
	I0717 19:37:17.734043  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.734052  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:17.734057  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:17.734123  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:17.771539  459741 cri.go:89] found id: ""
	I0717 19:37:17.771575  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.771586  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:17.771597  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:17.771611  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:17.811742  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:17.811783  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:17.861865  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:17.861909  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:17.876221  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:17.876255  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:17.957239  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:17.957262  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:17.957278  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:20.539123  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:20.554464  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:20.554546  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:20.591656  459741 cri.go:89] found id: ""
	I0717 19:37:20.591697  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.591706  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:20.591716  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:20.591775  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:20.629470  459741 cri.go:89] found id: ""
	I0717 19:37:20.629504  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.629513  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:20.629519  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:20.629587  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:20.670022  459741 cri.go:89] found id: ""
	I0717 19:37:20.670090  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.670108  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:20.670120  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:20.670199  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:20.711820  459741 cri.go:89] found id: ""
	I0717 19:37:20.711858  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.711869  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:20.711878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:20.711952  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:20.746305  459741 cri.go:89] found id: ""
	I0717 19:37:20.746339  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.746349  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:20.746356  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:20.746423  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:20.782218  459741 cri.go:89] found id: ""
	I0717 19:37:20.782255  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.782266  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:20.782275  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:20.782351  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:20.818704  459741 cri.go:89] found id: ""
	I0717 19:37:20.818740  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.818749  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:20.818757  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:20.818820  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:20.853662  459741 cri.go:89] found id: ""
	I0717 19:37:20.853693  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.853701  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:20.853710  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:20.853723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:20.896351  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:20.896377  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:20.948402  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:20.948450  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:20.962807  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:20.962840  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:21.057005  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:21.057036  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:21.057055  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:18.693664  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:21.192940  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:21.676124  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:24.175970  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:23.395791  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:25.894186  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:23.634596  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:23.648460  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:23.648555  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:23.687289  459741 cri.go:89] found id: ""
	I0717 19:37:23.687320  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.687331  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:23.687341  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:23.687407  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:23.725794  459741 cri.go:89] found id: ""
	I0717 19:37:23.725826  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.725847  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:23.725855  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:23.725916  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:23.761575  459741 cri.go:89] found id: ""
	I0717 19:37:23.761624  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.761635  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:23.761643  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:23.761709  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:23.800061  459741 cri.go:89] found id: ""
	I0717 19:37:23.800098  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.800111  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:23.800120  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:23.800190  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:23.836067  459741 cri.go:89] found id: ""
	I0717 19:37:23.836098  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.836107  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:23.836113  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:23.836170  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:23.875151  459741 cri.go:89] found id: ""
	I0717 19:37:23.875179  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.875192  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:23.875200  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:23.875268  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:23.913641  459741 cri.go:89] found id: ""
	I0717 19:37:23.913675  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.913685  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:23.913693  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:23.913759  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:23.950362  459741 cri.go:89] found id: ""
	I0717 19:37:23.950391  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.950400  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:23.950410  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:23.950426  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:24.000879  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:24.000924  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:24.014874  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:24.014912  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:24.086589  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:24.086624  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:24.086639  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:24.163160  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:24.163208  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:23.194522  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:25.694306  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:26.675299  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:28.675607  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:31.176216  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:27.895077  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:29.895208  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:26.705781  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:26.720471  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:26.720562  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:26.776895  459741 cri.go:89] found id: ""
	I0717 19:37:26.776927  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.776936  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:26.776945  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:26.777038  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:26.812191  459741 cri.go:89] found id: ""
	I0717 19:37:26.812219  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.812228  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:26.812234  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:26.812288  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:26.851142  459741 cri.go:89] found id: ""
	I0717 19:37:26.851174  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.851183  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:26.851189  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:26.851243  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:26.887218  459741 cri.go:89] found id: ""
	I0717 19:37:26.887254  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.887266  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:26.887274  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:26.887364  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:26.924197  459741 cri.go:89] found id: ""
	I0717 19:37:26.924226  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.924234  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:26.924240  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:26.924293  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:26.964475  459741 cri.go:89] found id: ""
	I0717 19:37:26.964528  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.964538  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:26.964545  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:26.964618  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:27.001951  459741 cri.go:89] found id: ""
	I0717 19:37:27.002001  459741 logs.go:276] 0 containers: []
	W0717 19:37:27.002010  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:27.002017  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:27.002068  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:27.037062  459741 cri.go:89] found id: ""
	I0717 19:37:27.037094  459741 logs.go:276] 0 containers: []
	W0717 19:37:27.037108  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:27.037122  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:27.037140  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:27.090343  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:27.090389  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:27.104534  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:27.104579  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:27.179957  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:27.179982  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:27.179995  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:27.260358  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:27.260399  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:29.806487  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:29.821519  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:29.821584  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:29.856293  459741 cri.go:89] found id: ""
	I0717 19:37:29.856328  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.856338  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:29.856347  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:29.856413  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:29.893174  459741 cri.go:89] found id: ""
	I0717 19:37:29.893210  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.893220  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:29.893229  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:29.893294  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:29.928264  459741 cri.go:89] found id: ""
	I0717 19:37:29.928298  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.928309  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:29.928316  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:29.928386  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:29.963399  459741 cri.go:89] found id: ""
	I0717 19:37:29.963441  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.963453  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:29.963461  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:29.963532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:30.001835  459741 cri.go:89] found id: ""
	I0717 19:37:30.001868  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.001878  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:30.001886  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:30.001953  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:30.039476  459741 cri.go:89] found id: ""
	I0717 19:37:30.039507  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.039516  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:30.039526  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:30.039601  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:30.076051  459741 cri.go:89] found id: ""
	I0717 19:37:30.076089  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.076101  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:30.076121  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:30.076198  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:30.110959  459741 cri.go:89] found id: ""
	I0717 19:37:30.110988  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.111000  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:30.111013  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:30.111029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:30.195062  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:30.195101  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:30.235830  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:30.235872  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:30.291057  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:30.291098  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:30.306510  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:30.306543  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:30.382689  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
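The "connection refused" block above repeats throughout this profile's trace: every probe for a control-plane container comes back empty, so the describe-nodes call has no apiserver listening on localhost:8443. A minimal sketch of the same checks run by hand on the node, using only commands already shown in this log (getting a shell on the node, for example via "minikube ssh -p <profile>", is assumed; the profile name is not part of this excerpt):

    sudo crictl ps -a --quiet --name=kube-apiserver          # empty output => no apiserver container yet
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig            # fails with "connection refused" while the apiserver is down
    sudo journalctl -u kubelet -n 400                         # same kubelet slice the log gatherer collects
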
	I0717 19:37:28.193720  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:30.693187  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.193323  459147 pod_ready.go:81] duration metric: took 4m0.007067784s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	E0717 19:37:32.193346  459147 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 19:37:32.193354  459147 pod_ready.go:38] duration metric: took 4m5.556690666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:37:32.193373  459147 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:37:32.193409  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:32.193469  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:32.245735  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:32.245775  459147 cri.go:89] found id: ""
	I0717 19:37:32.245785  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:32.245865  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.250669  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:32.250736  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:32.291837  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:32.291863  459147 cri.go:89] found id: ""
	I0717 19:37:32.291873  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:32.291944  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.296739  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:32.296806  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:32.335823  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:32.335854  459147 cri.go:89] found id: ""
	I0717 19:37:32.335873  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:32.335944  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.341789  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:32.341875  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:32.382106  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:32.382128  459147 cri.go:89] found id: ""
	I0717 19:37:32.382136  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:32.382183  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.386399  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:32.386453  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:32.426319  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:32.426348  459147 cri.go:89] found id: ""
	I0717 19:37:32.426358  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:32.426415  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.431280  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:32.431363  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:33.176404  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:35.177851  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.397457  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:34.894702  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.883437  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:32.898085  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:32.898159  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:32.933782  459741 cri.go:89] found id: ""
	I0717 19:37:32.933813  459741 logs.go:276] 0 containers: []
	W0717 19:37:32.933823  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:32.933842  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:32.933909  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:32.973843  459741 cri.go:89] found id: ""
	I0717 19:37:32.973871  459741 logs.go:276] 0 containers: []
	W0717 19:37:32.973879  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:32.973885  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:32.973936  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:33.010691  459741 cri.go:89] found id: ""
	I0717 19:37:33.010718  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.010727  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:33.010732  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:33.010791  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:33.051223  459741 cri.go:89] found id: ""
	I0717 19:37:33.051258  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.051269  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:33.051276  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:33.051345  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:33.091182  459741 cri.go:89] found id: ""
	I0717 19:37:33.091212  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.091220  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:33.091225  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:33.091279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:33.128755  459741 cri.go:89] found id: ""
	I0717 19:37:33.128791  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.128804  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:33.128820  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:33.128887  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:33.171834  459741 cri.go:89] found id: ""
	I0717 19:37:33.171871  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.171883  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:33.171890  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:33.171956  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:33.230954  459741 cri.go:89] found id: ""
	I0717 19:37:33.230982  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.230990  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:33.231001  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:33.231013  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:33.325437  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:33.325483  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:33.325500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:33.418548  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:33.418590  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:33.467574  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:33.467614  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:33.521312  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:33.521346  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.037360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:36.051209  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:36.051279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:36.088849  459741 cri.go:89] found id: ""
	I0717 19:37:36.088897  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.088909  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:36.088916  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:36.088973  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:36.124070  459741 cri.go:89] found id: ""
	I0717 19:37:36.124106  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.124118  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:36.124125  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:36.124199  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:36.159373  459741 cri.go:89] found id: ""
	I0717 19:37:36.159402  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.159410  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:36.159415  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:36.159467  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:36.197269  459741 cri.go:89] found id: ""
	I0717 19:37:36.197294  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.197302  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:36.197337  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:36.197389  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:36.231024  459741 cri.go:89] found id: ""
	I0717 19:37:36.231060  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.231072  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:36.231080  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:36.231152  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:36.265388  459741 cri.go:89] found id: ""
	I0717 19:37:36.265414  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.265422  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:36.265429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:36.265477  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:36.301738  459741 cri.go:89] found id: ""
	I0717 19:37:36.301774  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.301786  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:36.301794  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:36.301892  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:36.340042  459741 cri.go:89] found id: ""
	I0717 19:37:36.340072  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.340080  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:36.340091  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:36.340113  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:36.389928  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:36.389962  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:36.442668  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:36.442698  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.458862  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:36.458908  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:32.470477  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:32.470505  459147 cri.go:89] found id: ""
	I0717 19:37:32.470514  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:32.470579  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.474790  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:32.474845  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:32.511020  459147 cri.go:89] found id: ""
	I0717 19:37:32.511060  459147 logs.go:276] 0 containers: []
	W0717 19:37:32.511075  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:32.511083  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:32.511148  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:32.550662  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:32.550694  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:32.550700  459147 cri.go:89] found id: ""
	I0717 19:37:32.550710  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:32.550779  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.555544  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.559818  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:32.559845  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:32.599011  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:32.599044  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:32.639034  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:32.639072  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:32.680456  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:32.680497  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:32.735881  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:32.735919  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:33.295876  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:33.295927  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:33.453164  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:33.453204  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:33.469665  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:33.469696  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:33.518388  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:33.518425  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:33.580637  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:33.580683  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:33.618544  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:33.618584  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:33.656083  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:33.656127  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:33.703083  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:33.703133  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:36.261037  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:36.278701  459147 api_server.go:72] duration metric: took 4m12.907019507s to wait for apiserver process to appear ...
	I0717 19:37:36.278734  459147 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:37:36.278780  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:36.278843  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:36.320128  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:36.320158  459147 cri.go:89] found id: ""
	I0717 19:37:36.320169  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:36.320231  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.325077  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:36.325145  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:36.375930  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:36.375956  459147 cri.go:89] found id: ""
	I0717 19:37:36.375965  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:36.376022  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.381348  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:36.381428  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:36.425613  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:36.425642  459147 cri.go:89] found id: ""
	I0717 19:37:36.425653  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:36.425718  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.430743  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:36.430809  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:36.473039  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:36.473071  459147 cri.go:89] found id: ""
	I0717 19:37:36.473082  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:36.473144  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.477553  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:36.477632  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:36.519042  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:36.519066  459147 cri.go:89] found id: ""
	I0717 19:37:36.519088  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:36.519168  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.523986  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:36.524052  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:36.565547  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:36.565574  459147 cri.go:89] found id: ""
	I0717 19:37:36.565583  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:36.565636  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.570755  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:36.570832  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:36.608157  459147 cri.go:89] found id: ""
	I0717 19:37:36.608185  459147 logs.go:276] 0 containers: []
	W0717 19:37:36.608194  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:36.608201  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:36.608258  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:36.652807  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:36.652828  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:36.652832  459147 cri.go:89] found id: ""
	I0717 19:37:36.652839  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:36.652899  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.657815  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.663187  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:36.663219  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.681970  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:36.682006  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:36.797996  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:36.798041  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:36.862257  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:36.862300  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:36.900711  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:36.900752  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:37.384370  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:37.384415  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:37.676589  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:40.177720  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:36.888133  459447 pod_ready.go:81] duration metric: took 4m0.000157346s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" ...
	E0717 19:37:36.888161  459447 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 19:37:36.888179  459447 pod_ready.go:38] duration metric: took 4m7.552581235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:37:36.888210  459447 kubeadm.go:597] duration metric: took 4m17.06862666s to restartPrimaryControlPlane
	W0717 19:37:36.888317  459447 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:37:36.888368  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	W0717 19:37:36.537169  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:36.537199  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:36.537216  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:39.120374  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:39.138989  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:39.139065  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:39.198086  459741 cri.go:89] found id: ""
	I0717 19:37:39.198113  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.198121  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:39.198128  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:39.198192  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:39.249660  459741 cri.go:89] found id: ""
	I0717 19:37:39.249707  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.249718  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:39.249725  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:39.249802  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:39.296042  459741 cri.go:89] found id: ""
	I0717 19:37:39.296079  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.296105  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:39.296115  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:39.296198  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:39.335401  459741 cri.go:89] found id: ""
	I0717 19:37:39.335441  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.335453  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:39.335461  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:39.335532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:39.379343  459741 cri.go:89] found id: ""
	I0717 19:37:39.379389  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.379401  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:39.379409  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:39.379478  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:39.417450  459741 cri.go:89] found id: ""
	I0717 19:37:39.417478  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.417486  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:39.417493  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:39.417556  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:39.453778  459741 cri.go:89] found id: ""
	I0717 19:37:39.453821  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.453835  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:39.453843  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:39.453937  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:39.490619  459741 cri.go:89] found id: ""
	I0717 19:37:39.490654  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.490666  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:39.490678  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:39.490695  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:39.552266  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:39.552304  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:39.567973  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:39.568018  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:39.659709  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:39.659740  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:39.659757  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:39.752017  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:39.752064  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:37.438269  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:37.438314  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:37.491298  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:37.491338  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:37.544646  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:37.544686  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:37.608191  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:37.608229  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:37.652477  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:37.652526  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:37.693416  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:37.693460  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:37.740997  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:37.741045  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.285764  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:37:40.292091  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 200:
	ok
	I0717 19:37:40.293337  459147 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:37:40.293368  459147 api_server.go:131] duration metric: took 4.014624748s to wait for apiserver health ...
	I0717 19:37:40.293379  459147 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:37:40.293412  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:40.293485  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:40.334754  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:40.334783  459147 cri.go:89] found id: ""
	I0717 19:37:40.334794  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:40.334855  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.338862  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:40.338932  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:40.379320  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:40.379350  459147 cri.go:89] found id: ""
	I0717 19:37:40.379361  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:40.379424  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.384351  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:40.384426  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:40.423393  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:40.423421  459147 cri.go:89] found id: ""
	I0717 19:37:40.423432  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:40.423496  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.429541  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:40.429622  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:40.476723  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:40.476752  459147 cri.go:89] found id: ""
	I0717 19:37:40.476762  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:40.476822  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.483324  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:40.483407  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:40.530062  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:40.530090  459147 cri.go:89] found id: ""
	I0717 19:37:40.530100  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:40.530160  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.535894  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:40.535980  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:40.574966  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:40.575000  459147 cri.go:89] found id: ""
	I0717 19:37:40.575011  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:40.575082  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.579633  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:40.579709  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:40.617093  459147 cri.go:89] found id: ""
	I0717 19:37:40.617131  459147 logs.go:276] 0 containers: []
	W0717 19:37:40.617143  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:40.617151  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:40.617217  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:40.670143  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.670170  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:40.670177  459147 cri.go:89] found id: ""
	I0717 19:37:40.670188  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:40.670265  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.675795  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.681005  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:40.681027  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.729750  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:40.729797  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:41.109749  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:41.109806  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:41.128573  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:41.128616  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:41.246119  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:41.246163  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:41.298281  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:41.298342  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:41.376160  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:41.376205  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:41.444696  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:41.444732  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:41.488191  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:41.488225  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:41.554001  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:41.554055  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:41.596172  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:41.596208  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:41.636145  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:41.636184  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:41.687058  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:41.687092  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:44.246334  459147 system_pods.go:59] 8 kube-system pods found
	I0717 19:37:44.246367  459147 system_pods.go:61] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running
	I0717 19:37:44.246373  459147 system_pods.go:61] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running
	I0717 19:37:44.246379  459147 system_pods.go:61] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running
	I0717 19:37:44.246384  459147 system_pods.go:61] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running
	I0717 19:37:44.246390  459147 system_pods.go:61] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:37:44.246394  459147 system_pods.go:61] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running
	I0717 19:37:44.246401  459147 system_pods.go:61] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:37:44.246406  459147 system_pods.go:61] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:37:44.246416  459147 system_pods.go:74] duration metric: took 3.953030235s to wait for pod list to return data ...
	I0717 19:37:44.246425  459147 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:37:44.249315  459147 default_sa.go:45] found service account: "default"
	I0717 19:37:44.249336  459147 default_sa.go:55] duration metric: took 2.904936ms for default service account to be created ...
	I0717 19:37:44.249344  459147 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:37:44.254845  459147 system_pods.go:86] 8 kube-system pods found
	I0717 19:37:44.254873  459147 system_pods.go:89] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running
	I0717 19:37:44.254879  459147 system_pods.go:89] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running
	I0717 19:37:44.254883  459147 system_pods.go:89] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running
	I0717 19:37:44.254888  459147 system_pods.go:89] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running
	I0717 19:37:44.254892  459147 system_pods.go:89] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:37:44.254895  459147 system_pods.go:89] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running
	I0717 19:37:44.254902  459147 system_pods.go:89] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:37:44.254908  459147 system_pods.go:89] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:37:44.254916  459147 system_pods.go:126] duration metric: took 5.565796ms to wait for k8s-apps to be running ...
	I0717 19:37:44.254922  459147 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:37:44.254970  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:37:44.273765  459147 system_svc.go:56] duration metric: took 18.830474ms WaitForService to wait for kubelet
	I0717 19:37:44.273805  459147 kubeadm.go:582] duration metric: took 4m20.90212576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:37:44.273838  459147 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:37:44.278782  459147 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:37:44.278833  459147 node_conditions.go:123] node cpu capacity is 2
	I0717 19:37:44.278864  459147 node_conditions.go:105] duration metric: took 5.01941ms to run NodePressure ...
	I0717 19:37:44.278879  459147 start.go:241] waiting for startup goroutines ...
	I0717 19:37:44.278889  459147 start.go:246] waiting for cluster config update ...
	I0717 19:37:44.278906  459147 start.go:255] writing updated cluster config ...
	I0717 19:37:44.279303  459147 ssh_runner.go:195] Run: rm -f paused
	I0717 19:37:44.331361  459147 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 19:37:44.334137  459147 out.go:177] * Done! kubectl is now configured to use "no-preload-713715" cluster and "default" namespace by default
	I0717 19:37:42.676991  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:45.176025  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
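The Ready=False lines above are minikube's extra wait on system pods polling the metrics-server pod roughly every 2-3 seconds until its 4m deadline. When reproducing this outside the test harness, the pod's events usually explain the stall; a hedged example (the label selector and the <profile> placeholder are assumptions, not taken from this log):

    kubectl --context <profile> -n kube-system describe pod -l k8s-app=metrics-server
    kubectl --context <profile> -n kube-system get events --sort-by=.lastTimestamp | tail -n 20
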
	I0717 19:37:42.298864  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:42.312076  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:42.312160  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:42.346742  459741 cri.go:89] found id: ""
	I0717 19:37:42.346767  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.346782  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:42.346787  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:42.346839  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:42.386100  459741 cri.go:89] found id: ""
	I0717 19:37:42.386131  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.386139  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:42.386145  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:42.386196  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:42.420604  459741 cri.go:89] found id: ""
	I0717 19:37:42.420634  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.420646  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:42.420656  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:42.420725  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:42.457305  459741 cri.go:89] found id: ""
	I0717 19:37:42.457338  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.457349  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:42.457357  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:42.457422  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:42.491383  459741 cri.go:89] found id: ""
	I0717 19:37:42.491418  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.491427  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:42.491434  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:42.491489  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:42.527500  459741 cri.go:89] found id: ""
	I0717 19:37:42.527533  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.527547  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:42.527557  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:42.527642  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:42.560724  459741 cri.go:89] found id: ""
	I0717 19:37:42.560759  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.560769  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:42.560778  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:42.560854  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:42.595812  459741 cri.go:89] found id: ""
	I0717 19:37:42.595846  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.595858  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:42.595870  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:42.595886  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:42.610094  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:42.610129  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:42.683744  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:42.683763  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:42.683776  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:42.767187  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:42.767237  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:42.810319  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:42.810350  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:45.363245  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:45.378562  459741 kubeadm.go:597] duration metric: took 4m4.629259775s to restartPrimaryControlPlane
	W0717 19:37:45.378681  459741 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:37:45.378723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:37:47.675784  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:50.174617  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:50.298107  459741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.919332692s)
	I0717 19:37:50.298189  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:37:50.314299  459741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:37:50.325112  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:37:50.335943  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:37:50.335970  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:37:50.336018  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:37:50.345604  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:37:50.345669  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:37:50.355339  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:37:50.365401  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:37:50.365468  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:37:50.378870  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:37:50.388710  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:37:50.388779  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:37:50.398847  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:37:50.408579  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:37:50.408648  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:37:50.419223  459741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:37:50.655878  459741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:37:52.175610  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:54.675346  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:57.175606  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:59.175665  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:01.675667  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:04.174856  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:06.175048  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:08.558767  459447 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.670364582s)
	I0717 19:38:08.558869  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:08.574972  459447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:38:08.585748  459447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:38:08.595641  459447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:38:08.595677  459447 kubeadm.go:157] found existing configuration files:
	
	I0717 19:38:08.595741  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 19:38:08.605738  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:38:08.605792  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:38:08.615415  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 19:38:08.625406  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:38:08.625465  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:38:08.635462  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 19:38:08.644862  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:38:08.644938  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:38:08.654840  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 19:38:08.664308  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:38:08.664371  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:38:08.675152  459447 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:38:08.726060  459447 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 19:38:08.726181  459447 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:38:08.868399  459447 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:38:08.868535  459447 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:38:08.868680  459447 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:38:09.092126  459447 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:38:09.094144  459447 out.go:204]   - Generating certificates and keys ...
	I0717 19:38:09.094257  459447 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:38:09.094344  459447 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:38:09.094447  459447 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:38:09.094529  459447 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:38:09.094728  459447 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:38:09.094841  459447 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:38:09.094958  459447 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:38:09.095051  459447 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:38:09.095145  459447 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:38:09.095234  459447 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:38:09.095302  459447 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:38:09.095407  459447 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:38:09.220760  459447 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:38:09.395779  459447 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:38:09.485283  459447 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:38:09.582142  459447 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:38:09.644739  459447 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:38:09.645546  459447 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:38:09.648168  459447 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:38:08.175516  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:10.676234  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:09.651091  459447 out.go:204]   - Booting up control plane ...
	I0717 19:38:09.651237  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:38:09.651380  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:38:09.651472  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:38:09.672137  459447 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:38:09.675016  459447 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:38:09.675265  459447 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:38:09.835705  459447 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:38:09.835804  459447 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:38:10.837657  459447 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002210874s
	I0717 19:38:10.837780  459447 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 19:38:15.841849  459447 kubeadm.go:310] [api-check] The API server is healthy after 5.002346886s
	I0717 19:38:15.853189  459447 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:38:15.871261  459447 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:38:15.901421  459447 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:38:15.901663  459447 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-378944 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:38:15.914138  459447 kubeadm.go:310] [bootstrap-token] Using token: f20mgr.mp8yeahngp4xg46o
	I0717 19:38:12.678188  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:15.176507  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:15.916156  459447 out.go:204]   - Configuring RBAC rules ...
	I0717 19:38:15.916304  459447 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:38:15.926114  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:38:15.936748  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:38:15.940344  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:38:15.943530  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:38:15.947036  459447 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:38:16.249457  459447 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:38:16.706293  459447 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 19:38:17.247816  459447 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 19:38:17.249321  459447 kubeadm.go:310] 
	I0717 19:38:17.249431  459447 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 19:38:17.249453  459447 kubeadm.go:310] 
	I0717 19:38:17.249552  459447 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 19:38:17.249563  459447 kubeadm.go:310] 
	I0717 19:38:17.249594  459447 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 19:38:17.249677  459447 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:38:17.249768  459447 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:38:17.249791  459447 kubeadm.go:310] 
	I0717 19:38:17.249868  459447 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 19:38:17.249878  459447 kubeadm.go:310] 
	I0717 19:38:17.249949  459447 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:38:17.249968  459447 kubeadm.go:310] 
	I0717 19:38:17.250016  459447 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 19:38:17.250083  459447 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:38:17.250143  459447 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:38:17.250149  459447 kubeadm.go:310] 
	I0717 19:38:17.250269  459447 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:38:17.250371  459447 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 19:38:17.250381  459447 kubeadm.go:310] 
	I0717 19:38:17.250484  459447 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token f20mgr.mp8yeahngp4xg46o \
	I0717 19:38:17.250605  459447 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 19:38:17.250663  459447 kubeadm.go:310] 	--control-plane 
	I0717 19:38:17.250677  459447 kubeadm.go:310] 
	I0717 19:38:17.250771  459447 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:38:17.250784  459447 kubeadm.go:310] 
	I0717 19:38:17.250870  459447 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token f20mgr.mp8yeahngp4xg46o \
	I0717 19:38:17.251029  459447 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 19:38:17.252262  459447 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:38:17.252302  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:38:17.252318  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:38:17.254910  459447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:38:17.669679  459061 pod_ready.go:81] duration metric: took 4m0.000889569s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" ...
	E0717 19:38:17.669706  459061 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 19:38:17.669726  459061 pod_ready.go:38] duration metric: took 4m8.910120635s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:17.669768  459061 kubeadm.go:597] duration metric: took 4m18.632716414s to restartPrimaryControlPlane
	W0717 19:38:17.669838  459061 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:38:17.669870  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:38:17.256192  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:38:17.268586  459447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:38:17.292455  459447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:38:17.292536  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:17.292623  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-378944 minikube.k8s.io/updated_at=2024_07_17T19_38_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=default-k8s-diff-port-378944 minikube.k8s.io/primary=true
	I0717 19:38:17.325184  459447 ops.go:34] apiserver oom_adj: -16
	I0717 19:38:17.469427  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:17.969845  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:18.470139  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:18.969524  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:19.469856  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:19.970486  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:20.470263  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:20.970157  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:21.470331  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:21.969885  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:22.469572  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:22.969898  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:23.470149  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:23.970327  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:24.470275  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:24.970386  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:25.469631  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:25.969749  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:26.469512  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:26.970082  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:27.469534  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:27.970318  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:28.470232  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:28.970033  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:29.469586  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:29.969588  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:30.469599  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:30.970505  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:31.119385  459447 kubeadm.go:1113] duration metric: took 13.826924083s to wait for elevateKubeSystemPrivileges
	I0717 19:38:31.119428  459447 kubeadm.go:394] duration metric: took 5m11.355625204s to StartCluster
	I0717 19:38:31.119449  459447 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:38:31.119548  459447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:38:31.121296  459447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:38:31.121610  459447 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:38:31.121724  459447 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:38:31.121802  459447 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-378944"
	I0717 19:38:31.121827  459447 config.go:182] Loaded profile config "default-k8s-diff-port-378944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:38:31.121846  459447 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-378944"
	I0717 19:38:31.121849  459447 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-378944"
	I0717 19:38:31.121873  459447 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-378944"
	W0717 19:38:31.121883  459447 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:38:31.121899  459447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-378944"
	I0717 19:38:31.121906  459447 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-378944"
	W0717 19:38:31.121915  459447 addons.go:243] addon metrics-server should already be in state true
	I0717 19:38:31.121927  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.121969  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.122322  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122339  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122366  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.122379  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122388  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.122411  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.123339  459447 out.go:177] * Verifying Kubernetes components...
	I0717 19:38:31.129194  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:38:31.139023  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0717 19:38:31.139292  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0717 19:38:31.139632  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.139775  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.140272  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.140292  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.140684  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.140710  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.140731  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.141234  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.141257  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.141425  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0717 19:38:31.141431  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.141919  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.142149  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.142181  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.142410  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.142435  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.142824  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.143055  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.147020  459447 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-378944"
	W0717 19:38:31.147043  459447 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:38:31.147076  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.147428  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.147462  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.158908  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
	I0717 19:38:31.159534  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.160413  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.160438  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.161313  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.161588  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.161794  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0717 19:38:31.162315  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.162935  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.162963  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.163360  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.163618  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.164401  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.165089  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40011
	I0717 19:38:31.165402  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.165493  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.166082  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.166108  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.166133  459447 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:38:31.166520  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.166951  459447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:38:31.166995  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.167294  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.167678  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:38:31.167700  459447 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:38:31.167725  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.168668  459447 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:38:31.168686  459447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:38:31.168704  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.171358  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.171986  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.172013  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172236  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172379  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.172558  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.172646  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.172749  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.172778  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172902  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.173186  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.173396  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.173570  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.173711  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.184779  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0717 19:38:31.185400  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.186325  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.186350  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.186736  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.186981  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.188627  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.188841  459447 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:38:31.188860  459447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:38:31.188881  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.191674  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.192104  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.192129  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.192375  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.192868  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.193084  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.193250  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.351524  459447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:38:31.365996  459447 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-378944" to be "Ready" ...
	I0717 19:38:31.376135  459447 node_ready.go:49] node "default-k8s-diff-port-378944" has status "Ready":"True"
	I0717 19:38:31.376168  459447 node_ready.go:38] duration metric: took 10.135533ms for node "default-k8s-diff-port-378944" to be "Ready" ...
	I0717 19:38:31.376182  459447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:31.385746  459447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:31.471924  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:38:31.488412  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:38:31.488440  459447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:38:31.489634  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:38:31.578028  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:38:31.578059  459447 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:38:31.653567  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:38:31.653598  459447 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:38:31.692100  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:38:32.700716  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.228741753s)
	I0717 19:38:32.700795  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.700796  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.211127639s)
	I0717 19:38:32.700851  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.700869  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.700808  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703149  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703149  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703155  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703183  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703193  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.703202  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703163  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703235  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703254  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.703267  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703505  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703517  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703529  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703554  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703520  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.778305  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.778331  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.778693  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.778779  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.778733  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.942079  459447 pod_ready.go:92] pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:32.942114  459447 pod_ready.go:81] duration metric: took 1.556334407s for pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:32.942128  459447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.018197  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326052616s)
	I0717 19:38:33.018262  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:33.018277  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:33.018625  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:33.018649  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:33.018659  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:33.018669  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:33.018696  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:33.018956  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:33.018975  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:33.018996  459447 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-378944"
	I0717 19:38:33.021803  459447 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:38:33.023032  459447 addons.go:510] duration metric: took 1.901306809s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:38:33.949013  459447 pod_ready.go:92] pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.949038  459447 pod_ready.go:81] duration metric: took 1.006901797s for pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.949050  459447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.953373  459447 pod_ready.go:92] pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.953393  459447 pod_ready.go:81] duration metric: took 4.33631ms for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.953404  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.957845  459447 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.957869  459447 pod_ready.go:81] duration metric: took 4.456882ms for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.957881  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.962465  459447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.962488  459447 pod_ready.go:81] duration metric: took 4.598385ms for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.962500  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vhjq4" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.170244  459447 pod_ready.go:92] pod "kube-proxy-vhjq4" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:34.170274  459447 pod_ready.go:81] duration metric: took 207.766629ms for pod "kube-proxy-vhjq4" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.170284  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.570267  459447 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:34.570299  459447 pod_ready.go:81] duration metric: took 400.008056ms for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.570324  459447 pod_ready.go:38] duration metric: took 3.194102991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:34.570356  459447 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:38:34.570415  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:38:34.590893  459447 api_server.go:72] duration metric: took 3.469242847s to wait for apiserver process to appear ...
	I0717 19:38:34.590918  459447 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:38:34.590939  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:38:34.596086  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 200:
	ok
	I0717 19:38:34.597189  459447 api_server.go:141] control plane version: v1.30.2
	I0717 19:38:34.597213  459447 api_server.go:131] duration metric: took 6.288225ms to wait for apiserver health ...
	I0717 19:38:34.597221  459447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:38:34.774523  459447 system_pods.go:59] 9 kube-system pods found
	I0717 19:38:34.774563  459447 system_pods.go:61] "coredns-7db6d8ff4d-jnwgp" [f86efa81-cbe0-44a7-888f-639af3dc58ad] Running
	I0717 19:38:34.774571  459447 system_pods.go:61] "coredns-7db6d8ff4d-xbtct" [c24ce9ab-babb-4589-8046-e8e2d4ca68af] Running
	I0717 19:38:34.774577  459447 system_pods.go:61] "etcd-default-k8s-diff-port-378944" [b15d7ac0-b014-4fed-8e03-3b2eb8b23911] Running
	I0717 19:38:34.774582  459447 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-378944" [78cd796b-d751-44dd-91e7-85b48c77d87c] Running
	I0717 19:38:34.774590  459447 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-378944" [4981a20d-ce96-4c27-9b14-17e4a8a18a7c] Running
	I0717 19:38:34.774595  459447 system_pods.go:61] "kube-proxy-vhjq4" [092af79d-ebc0-4e16-97ef-725195e95344] Running
	I0717 19:38:34.774598  459447 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-378944" [60a0717a-ad29-4360-a514-afc1081f115c] Running
	I0717 19:38:34.774607  459447 system_pods.go:61] "metrics-server-569cc877fc-hvknj" [d214e760-d49e-4554-85c2-77e5da1b150f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:38:34.774613  459447 system_pods.go:61] "storage-provisioner" [153a102e-f07b-46b4-a9d0-9e754237ca6e] Running
	I0717 19:38:34.774624  459447 system_pods.go:74] duration metric: took 177.395337ms to wait for pod list to return data ...
	I0717 19:38:34.774636  459447 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:38:34.970004  459447 default_sa.go:45] found service account: "default"
	I0717 19:38:34.970040  459447 default_sa.go:55] duration metric: took 195.394993ms for default service account to be created ...
	I0717 19:38:34.970054  459447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:38:35.173288  459447 system_pods.go:86] 9 kube-system pods found
	I0717 19:38:35.173327  459447 system_pods.go:89] "coredns-7db6d8ff4d-jnwgp" [f86efa81-cbe0-44a7-888f-639af3dc58ad] Running
	I0717 19:38:35.173336  459447 system_pods.go:89] "coredns-7db6d8ff4d-xbtct" [c24ce9ab-babb-4589-8046-e8e2d4ca68af] Running
	I0717 19:38:35.173343  459447 system_pods.go:89] "etcd-default-k8s-diff-port-378944" [b15d7ac0-b014-4fed-8e03-3b2eb8b23911] Running
	I0717 19:38:35.173352  459447 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-378944" [78cd796b-d751-44dd-91e7-85b48c77d87c] Running
	I0717 19:38:35.173359  459447 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-378944" [4981a20d-ce96-4c27-9b14-17e4a8a18a7c] Running
	I0717 19:38:35.173365  459447 system_pods.go:89] "kube-proxy-vhjq4" [092af79d-ebc0-4e16-97ef-725195e95344] Running
	I0717 19:38:35.173370  459447 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-378944" [60a0717a-ad29-4360-a514-afc1081f115c] Running
	I0717 19:38:35.173377  459447 system_pods.go:89] "metrics-server-569cc877fc-hvknj" [d214e760-d49e-4554-85c2-77e5da1b150f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:38:35.173384  459447 system_pods.go:89] "storage-provisioner" [153a102e-f07b-46b4-a9d0-9e754237ca6e] Running
	I0717 19:38:35.173397  459447 system_pods.go:126] duration metric: took 203.335308ms to wait for k8s-apps to be running ...
	I0717 19:38:35.173406  459447 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:38:35.173471  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:35.188943  459447 system_svc.go:56] duration metric: took 15.522808ms WaitForService to wait for kubelet
	I0717 19:38:35.188980  459447 kubeadm.go:582] duration metric: took 4.067341756s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:38:35.189006  459447 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:38:35.369694  459447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:38:35.369723  459447 node_conditions.go:123] node cpu capacity is 2
	I0717 19:38:35.369748  459447 node_conditions.go:105] duration metric: took 180.736346ms to run NodePressure ...
	I0717 19:38:35.369764  459447 start.go:241] waiting for startup goroutines ...
	I0717 19:38:35.369773  459447 start.go:246] waiting for cluster config update ...
	I0717 19:38:35.369787  459447 start.go:255] writing updated cluster config ...
	I0717 19:38:35.370064  459447 ssh_runner.go:195] Run: rm -f paused
	I0717 19:38:35.422285  459447 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 19:38:35.424315  459447 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-378944" cluster and "default" namespace by default
	I0717 19:38:49.633874  459061 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.96396735s)
	I0717 19:38:49.633958  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:49.653668  459061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:38:49.665421  459061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:38:49.677405  459061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:38:49.677433  459061 kubeadm.go:157] found existing configuration files:
	
	I0717 19:38:49.677485  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:38:49.688418  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:38:49.688515  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:38:49.699121  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:38:49.709505  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:38:49.709622  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:38:49.720533  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:38:49.731191  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:38:49.731259  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:38:49.741071  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:38:49.750483  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:38:49.750540  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:38:49.759991  459061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:38:49.814169  459061 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 19:38:49.814235  459061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:38:49.977655  459061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:38:49.977811  459061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:38:49.977922  459061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:38:50.204096  459061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:38:50.206849  459061 out.go:204]   - Generating certificates and keys ...
	I0717 19:38:50.206956  459061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:38:50.207032  459061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:38:50.207102  459061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:38:50.207227  459061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:38:50.207341  459061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:38:50.207388  459061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:38:50.207448  459061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:38:50.207511  459061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:38:50.207618  459061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:38:50.207732  459061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:38:50.207787  459061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:38:50.207868  459061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:38:50.298049  459061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:38:50.456369  459061 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:38:50.649923  459061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:38:50.771710  459061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:38:50.939506  459061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:38:50.939999  459061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:38:50.942645  459061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:38:50.944456  459061 out.go:204]   - Booting up control plane ...
	I0717 19:38:50.944563  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:38:50.944648  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:38:50.944906  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:38:50.963779  459061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:38:50.964946  459061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:38:50.964999  459061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:38:51.112106  459061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:38:51.112222  459061 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:38:51.613966  459061 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.041018ms
	I0717 19:38:51.614079  459061 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 19:38:56.617120  459061 kubeadm.go:310] [api-check] The API server is healthy after 5.003106336s
	I0717 19:38:56.635312  459061 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:38:56.653249  459061 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:38:56.688277  459061 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:38:56.688570  459061 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-637675 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:38:56.703781  459061 kubeadm.go:310] [bootstrap-token] Using token: 5c1d8d.hedm6ka56xpdzroz
	I0717 19:38:56.705437  459061 out.go:204]   - Configuring RBAC rules ...
	I0717 19:38:56.705575  459061 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:38:56.712968  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:38:56.723899  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:38:56.731634  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:38:56.737169  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:38:56.745083  459061 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:38:57.024680  459061 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:38:57.477396  459061 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 19:38:58.025476  459061 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 19:38:58.026512  459061 kubeadm.go:310] 
	I0717 19:38:58.026631  459061 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 19:38:58.026655  459061 kubeadm.go:310] 
	I0717 19:38:58.026772  459061 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 19:38:58.026790  459061 kubeadm.go:310] 
	I0717 19:38:58.026828  459061 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 19:38:58.026905  459061 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:38:58.026971  459061 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:38:58.026979  459061 kubeadm.go:310] 
	I0717 19:38:58.027070  459061 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 19:38:58.027094  459061 kubeadm.go:310] 
	I0717 19:38:58.027163  459061 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:38:58.027171  459061 kubeadm.go:310] 
	I0717 19:38:58.027242  459061 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 19:38:58.027341  459061 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:38:58.027431  459061 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:38:58.027442  459061 kubeadm.go:310] 
	I0717 19:38:58.027547  459061 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:38:58.027663  459061 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 19:38:58.027673  459061 kubeadm.go:310] 
	I0717 19:38:58.027788  459061 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5c1d8d.hedm6ka56xpdzroz \
	I0717 19:38:58.027949  459061 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 19:38:58.027998  459061 kubeadm.go:310] 	--control-plane 
	I0717 19:38:58.028012  459061 kubeadm.go:310] 
	I0717 19:38:58.028123  459061 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:38:58.028133  459061 kubeadm.go:310] 
	I0717 19:38:58.028235  459061 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5c1d8d.hedm6ka56xpdzroz \
	I0717 19:38:58.028355  459061 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 19:38:58.028891  459061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:38:58.029012  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:38:58.029029  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:38:58.031915  459061 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:38:58.033543  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:38:58.044441  459061 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:38:58.062984  459061 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:38:58.063092  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:58.063115  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-637675 minikube.k8s.io/updated_at=2024_07_17T19_38_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=embed-certs-637675 minikube.k8s.io/primary=true
	I0717 19:38:58.088566  459061 ops.go:34] apiserver oom_adj: -16
	I0717 19:38:58.243142  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:58.743578  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:59.244162  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:59.743393  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:00.244096  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:00.743309  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:01.244049  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:01.743222  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:02.243771  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:02.743459  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:03.243303  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:03.743299  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:04.243263  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:04.743572  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:05.243876  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:05.743567  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:06.244040  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:06.743302  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:07.244174  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:07.744243  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:08.244108  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:08.744208  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:09.243712  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:09.743417  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:10.243321  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:10.743234  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:11.244006  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:11.744244  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:12.243673  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:12.373286  459061 kubeadm.go:1113] duration metric: took 14.310267908s to wait for elevateKubeSystemPrivileges
	I0717 19:39:12.373331  459061 kubeadm.go:394] duration metric: took 5m13.390297719s to StartCluster
	I0717 19:39:12.373357  459061 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:39:12.373461  459061 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:39:12.375404  459061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:39:12.375739  459061 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:39:12.375786  459061 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:39:12.375875  459061 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-637675"
	I0717 19:39:12.375919  459061 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-637675"
	W0717 19:39:12.375933  459061 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:39:12.375967  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.375981  459061 config.go:182] Loaded profile config "embed-certs-637675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:39:12.376031  459061 addons.go:69] Setting default-storageclass=true in profile "embed-certs-637675"
	I0717 19:39:12.376062  459061 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-637675"
	I0717 19:39:12.376333  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.376359  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.376426  459061 addons.go:69] Setting metrics-server=true in profile "embed-certs-637675"
	I0717 19:39:12.376494  459061 addons.go:234] Setting addon metrics-server=true in "embed-certs-637675"
	W0717 19:39:12.376526  459061 addons.go:243] addon metrics-server should already be in state true
	I0717 19:39:12.376596  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.376427  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.376672  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.376981  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.377140  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.378935  459061 out.go:177] * Verifying Kubernetes components...
	I0717 19:39:12.380094  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:39:12.396180  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37281
	I0717 19:39:12.396769  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.397333  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.397359  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.397449  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44781
	I0717 19:39:12.397580  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40945
	I0717 19:39:12.397773  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.397893  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.398045  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.398343  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.398355  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.398387  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.398430  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.398488  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.398499  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.398660  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.398798  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.399295  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.399322  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.399545  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.403398  459061 addons.go:234] Setting addon default-storageclass=true in "embed-certs-637675"
	W0717 19:39:12.403420  459061 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:39:12.403451  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.403872  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.403898  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.415595  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0717 19:39:12.416232  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.417013  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.417033  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.417587  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.418029  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.419082  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0717 19:39:12.420074  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.420699  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.420856  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.420875  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.421414  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.421614  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.423149  459061 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:39:12.423248  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
	I0717 19:39:12.423428  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.423575  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.424023  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.424076  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.424418  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.424571  459061 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:39:12.424588  459061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:39:12.424608  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.424944  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.424980  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.425348  459061 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:39:12.426757  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:39:12.426781  459061 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:39:12.426853  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.427990  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.428571  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.428594  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.429076  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.429456  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.429803  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.430161  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.430952  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.432978  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.433047  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.433185  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.433366  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.433623  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.433978  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.441066  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0717 19:39:12.441557  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.442011  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.442029  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.442447  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.442677  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.444789  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.444999  459061 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:39:12.445015  459061 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:39:12.445036  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.447829  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.448361  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.448390  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.448577  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.448770  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.448936  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.449070  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.728350  459061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:39:12.774599  459061 node_ready.go:35] waiting up to 6m0s for node "embed-certs-637675" to be "Ready" ...
	I0717 19:39:12.787047  459061 node_ready.go:49] node "embed-certs-637675" has status "Ready":"True"
	I0717 19:39:12.787080  459061 node_ready.go:38] duration metric: took 12.442277ms for node "embed-certs-637675" to be "Ready" ...
	I0717 19:39:12.787092  459061 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:39:12.794421  459061 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:12.884786  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:39:12.916243  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:39:12.956508  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:39:12.956539  459061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:39:13.012727  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:39:13.012757  459061 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:39:13.090259  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:39:13.090288  459061 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:39:13.189147  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:39:13.743500  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.743529  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.743886  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.743943  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.743967  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.743984  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.743993  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.744243  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.744292  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.744318  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.745277  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.745344  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.745605  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.745624  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.745632  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.745642  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.745646  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.745835  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.745861  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.745876  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.760884  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.760909  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.761330  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.761352  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.761392  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.809721  459061 pod_ready.go:92] pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:13.809743  459061 pod_ready.go:81] duration metric: took 1.015289517s for pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:13.809753  459061 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.027460  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:14.027489  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:14.027856  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:14.027878  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:14.027889  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:14.027898  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:14.028130  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:14.028146  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:14.028177  459061 addons.go:475] Verifying addon metrics-server=true in "embed-certs-637675"
	I0717 19:39:14.030113  459061 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:39:14.031442  459061 addons.go:510] duration metric: took 1.65566168s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:39:14.816503  459061 pod_ready.go:92] pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.816527  459061 pod_ready.go:81] duration metric: took 1.006767634s for pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.816536  459061 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.820749  459061 pod_ready.go:92] pod "etcd-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.820768  459061 pod_ready.go:81] duration metric: took 4.225695ms for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.820775  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.824793  459061 pod_ready.go:92] pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.824812  459061 pod_ready.go:81] duration metric: took 4.02987ms for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.824823  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.828718  459061 pod_ready.go:92] pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.828738  459061 pod_ready.go:81] duration metric: took 3.907636ms for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.828748  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dns5j" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.178249  459061 pod_ready.go:92] pod "kube-proxy-dns5j" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:15.178276  459061 pod_ready.go:81] duration metric: took 349.519823ms for pod "kube-proxy-dns5j" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.178289  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.578418  459061 pod_ready.go:92] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:15.578445  459061 pod_ready.go:81] duration metric: took 400.149092ms for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.578454  459061 pod_ready.go:38] duration metric: took 2.791350468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:39:15.578471  459061 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:39:15.578526  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:39:15.597456  459061 api_server.go:72] duration metric: took 3.221674147s to wait for apiserver process to appear ...
	I0717 19:39:15.597483  459061 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:39:15.597503  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:39:15.602054  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0717 19:39:15.603214  459061 api_server.go:141] control plane version: v1.30.2
	I0717 19:39:15.603238  459061 api_server.go:131] duration metric: took 5.7478ms to wait for apiserver health ...
	I0717 19:39:15.603248  459061 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:39:15.783262  459061 system_pods.go:59] 9 kube-system pods found
	I0717 19:39:15.783295  459061 system_pods.go:61] "coredns-7db6d8ff4d-45xn7" [9c936942-55bb-44c9-b446-365ec316c390] Running
	I0717 19:39:15.783300  459061 system_pods.go:61] "coredns-7db6d8ff4d-nw8g8" [0313a484-73be-49e2-a483-b15f47abc24a] Running
	I0717 19:39:15.783303  459061 system_pods.go:61] "etcd-embed-certs-637675" [d83ac63c-5eb5-40f0-bf58-37c048642b72] Running
	I0717 19:39:15.783307  459061 system_pods.go:61] "kube-apiserver-embed-certs-637675" [0b60ef89-e78c-4e24-b391-a5d4930d0f5f] Running
	I0717 19:39:15.783310  459061 system_pods.go:61] "kube-controller-manager-embed-certs-637675" [b2da7425-19f4-4435-8a30-17744a3289b0] Running
	I0717 19:39:15.783312  459061 system_pods.go:61] "kube-proxy-dns5j" [4d248751-6ee4-460d-b608-be6586613e3d] Running
	I0717 19:39:15.783315  459061 system_pods.go:61] "kube-scheduler-embed-certs-637675" [43f463da-858a-4261-b7a1-01e504e157f6] Running
	I0717 19:39:15.783321  459061 system_pods.go:61] "metrics-server-569cc877fc-jf42d" [c92dbb96-5721-4ff9-a428-9215223d2b83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:39:15.783325  459061 system_pods.go:61] "storage-provisioner" [11a18e44-b523-46b2-a890-dd693460e032] Running
	I0717 19:39:15.783331  459061 system_pods.go:74] duration metric: took 180.078172ms to wait for pod list to return data ...
	I0717 19:39:15.783339  459061 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:39:15.978711  459061 default_sa.go:45] found service account: "default"
	I0717 19:39:15.978747  459061 default_sa.go:55] duration metric: took 195.400502ms for default service account to be created ...
	I0717 19:39:15.978762  459061 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:39:16.181968  459061 system_pods.go:86] 9 kube-system pods found
	I0717 19:39:16.181997  459061 system_pods.go:89] "coredns-7db6d8ff4d-45xn7" [9c936942-55bb-44c9-b446-365ec316c390] Running
	I0717 19:39:16.182003  459061 system_pods.go:89] "coredns-7db6d8ff4d-nw8g8" [0313a484-73be-49e2-a483-b15f47abc24a] Running
	I0717 19:39:16.182007  459061 system_pods.go:89] "etcd-embed-certs-637675" [d83ac63c-5eb5-40f0-bf58-37c048642b72] Running
	I0717 19:39:16.182011  459061 system_pods.go:89] "kube-apiserver-embed-certs-637675" [0b60ef89-e78c-4e24-b391-a5d4930d0f5f] Running
	I0717 19:39:16.182016  459061 system_pods.go:89] "kube-controller-manager-embed-certs-637675" [b2da7425-19f4-4435-8a30-17744a3289b0] Running
	I0717 19:39:16.182021  459061 system_pods.go:89] "kube-proxy-dns5j" [4d248751-6ee4-460d-b608-be6586613e3d] Running
	I0717 19:39:16.182025  459061 system_pods.go:89] "kube-scheduler-embed-certs-637675" [43f463da-858a-4261-b7a1-01e504e157f6] Running
	I0717 19:39:16.182033  459061 system_pods.go:89] "metrics-server-569cc877fc-jf42d" [c92dbb96-5721-4ff9-a428-9215223d2b83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:39:16.182042  459061 system_pods.go:89] "storage-provisioner" [11a18e44-b523-46b2-a890-dd693460e032] Running
	I0717 19:39:16.182049  459061 system_pods.go:126] duration metric: took 203.281636ms to wait for k8s-apps to be running ...
	I0717 19:39:16.182057  459061 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:39:16.182101  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:39:16.198464  459061 system_svc.go:56] duration metric: took 16.391405ms WaitForService to wait for kubelet
	I0717 19:39:16.198504  459061 kubeadm.go:582] duration metric: took 3.822728067s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:39:16.198531  459061 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:39:16.378407  459061 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:39:16.378440  459061 node_conditions.go:123] node cpu capacity is 2
	I0717 19:39:16.378451  459061 node_conditions.go:105] duration metric: took 179.91335ms to run NodePressure ...
	I0717 19:39:16.378465  459061 start.go:241] waiting for startup goroutines ...
	I0717 19:39:16.378476  459061 start.go:246] waiting for cluster config update ...
	I0717 19:39:16.378489  459061 start.go:255] writing updated cluster config ...
	I0717 19:39:16.378845  459061 ssh_runner.go:195] Run: rm -f paused
	I0717 19:39:16.431808  459061 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 19:39:16.433648  459061 out.go:177] * Done! kubectl is now configured to use "embed-certs-637675" cluster and "default" namespace by default
	I0717 19:39:46.819105  459741 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:39:46.819209  459741 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 19:39:46.820837  459741 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:39:46.820940  459741 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:39:46.821010  459741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:39:46.821148  459741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:39:46.821282  459741 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:39:46.821377  459741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:39:46.823092  459741 out.go:204]   - Generating certificates and keys ...
	I0717 19:39:46.823190  459741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:39:46.823280  459741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:39:46.823409  459741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:39:46.823509  459741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:39:46.823629  459741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:39:46.823715  459741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:39:46.823802  459741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:39:46.823885  459741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:39:46.823975  459741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:39:46.824067  459741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:39:46.824109  459741 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:39:46.824183  459741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:39:46.824248  459741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:39:46.824309  459741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:39:46.824409  459741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:39:46.824506  459741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:39:46.824642  459741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:39:46.824729  459741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:39:46.824775  459741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:39:46.824869  459741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:39:46.826222  459741 out.go:204]   - Booting up control plane ...
	I0717 19:39:46.826334  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:39:46.826483  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:39:46.826566  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:39:46.826677  459741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:39:46.826855  459741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:39:46.826954  459741 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:39:46.827061  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827286  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827365  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827537  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827618  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827814  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827916  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.828105  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.828210  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.828440  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.828449  459741 kubeadm.go:310] 
	I0717 19:39:46.828482  459741 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:39:46.828544  459741 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:39:46.828555  459741 kubeadm.go:310] 
	I0717 19:39:46.828601  459741 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:39:46.828648  459741 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:39:46.828787  459741 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:39:46.828795  459741 kubeadm.go:310] 
	I0717 19:39:46.828928  459741 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:39:46.828975  459741 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:39:46.829023  459741 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:39:46.829033  459741 kubeadm.go:310] 
	I0717 19:39:46.829156  459741 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:39:46.829280  459741 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:39:46.829288  459741 kubeadm.go:310] 
	I0717 19:39:46.829430  459741 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:39:46.829538  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:39:46.829640  459741 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:39:46.829753  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:39:46.829812  459741 kubeadm.go:310] 
	W0717 19:39:46.829883  459741 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 19:39:46.829939  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:39:47.290949  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:39:47.307166  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:39:47.318260  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:39:47.318283  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:39:47.318336  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:39:47.328087  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:39:47.328150  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:39:47.339029  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:39:47.348854  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:39:47.348913  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:39:47.358498  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:39:47.368592  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:39:47.368651  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:39:47.379802  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:39:47.391069  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:39:47.391139  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:39:47.402410  459741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:39:47.620822  459741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:41:43.630999  459741 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:41:43.631161  459741 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 19:41:43.631238  459741 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:41:43.631322  459741 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:41:43.631452  459741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:41:43.631595  459741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:41:43.631767  459741 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:41:43.631852  459741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:41:43.633956  459741 out.go:204]   - Generating certificates and keys ...
	I0717 19:41:43.634058  459741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:41:43.634160  459741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:41:43.634292  459741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:41:43.634382  459741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:41:43.634457  459741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:41:43.634560  459741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:41:43.634646  459741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:41:43.634743  459741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:41:43.634848  459741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:41:43.634977  459741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:41:43.635038  459741 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:41:43.635088  459741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:41:43.635129  459741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:41:43.635173  459741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:41:43.635240  459741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:41:43.635326  459741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:41:43.635477  459741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:41:43.635594  459741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:41:43.635675  459741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:41:43.635758  459741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:41:43.637529  459741 out.go:204]   - Booting up control plane ...
	I0717 19:41:43.637719  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:41:43.637857  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:41:43.637948  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:41:43.638086  459741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:41:43.638278  459741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:41:43.638336  459741 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:41:43.638427  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.638656  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.638732  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.638966  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639046  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639310  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639407  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639665  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639769  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639950  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639969  459741 kubeadm.go:310] 
	I0717 19:41:43.640006  459741 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:41:43.640047  459741 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:41:43.640056  459741 kubeadm.go:310] 
	I0717 19:41:43.640101  459741 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:41:43.640148  459741 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:41:43.640247  459741 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:41:43.640255  459741 kubeadm.go:310] 
	I0717 19:41:43.640365  459741 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:41:43.640398  459741 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:41:43.640426  459741 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:41:43.640434  459741 kubeadm.go:310] 
	I0717 19:41:43.640580  459741 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:41:43.640664  459741 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:41:43.640676  459741 kubeadm.go:310] 
	I0717 19:41:43.640772  459741 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:41:43.640849  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:41:43.640912  459741 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:41:43.640975  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:41:43.640997  459741 kubeadm.go:310] 
	I0717 19:41:43.641050  459741 kubeadm.go:394] duration metric: took 8m2.947491611s to StartCluster
	I0717 19:41:43.641102  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:41:43.641159  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:41:43.691693  459741 cri.go:89] found id: ""
	I0717 19:41:43.691734  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.691746  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:41:43.691755  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:41:43.691822  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:41:43.730266  459741 cri.go:89] found id: ""
	I0717 19:41:43.730301  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.730311  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:41:43.730319  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:41:43.730401  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:41:43.766878  459741 cri.go:89] found id: ""
	I0717 19:41:43.766907  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.766916  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:41:43.766922  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:41:43.767012  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:41:43.810002  459741 cri.go:89] found id: ""
	I0717 19:41:43.810040  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.810051  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:41:43.810059  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:41:43.810133  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:41:43.846561  459741 cri.go:89] found id: ""
	I0717 19:41:43.846621  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.846637  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:41:43.846645  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:41:43.846715  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:41:43.884047  459741 cri.go:89] found id: ""
	I0717 19:41:43.884080  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.884091  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:41:43.884099  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:41:43.884224  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:41:43.931636  459741 cri.go:89] found id: ""
	I0717 19:41:43.931677  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.931691  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:41:43.931699  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:41:43.931768  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:41:43.969202  459741 cri.go:89] found id: ""
	I0717 19:41:43.969240  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.969260  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:41:43.969275  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:41:43.969296  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:41:44.026443  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:41:44.026500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:41:44.042750  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:41:44.042788  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:41:44.140053  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:41:44.140079  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:41:44.140093  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:41:44.263660  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:41:44.263704  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 19:41:44.311783  459741 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 19:41:44.311838  459741 out.go:239] * 
	W0717 19:41:44.311948  459741 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:41:44.311982  459741 out.go:239] * 
	W0717 19:41:44.313153  459741 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:41:44.316845  459741 out.go:177] 
	W0717 19:41:44.318001  459741 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:41:44.318059  459741 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 19:41:44.318087  459741 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 19:41:44.319471  459741 out.go:177] 
	
	
	==> CRI-O <==
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.180676707Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245850180646976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61773bc0-842a-4c32-b2e0-f9b1366ad5a2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.181287869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=783ee6e5-67ab-406c-9068-105c21f1b2ee name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.181360812Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=783ee6e5-67ab-406c-9068-105c21f1b2ee name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.181399790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=783ee6e5-67ab-406c-9068-105c21f1b2ee name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.217170725Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73cc9644-719f-491a-ac6d-2406c24944a4 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.217272425Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73cc9644-719f-491a-ac6d-2406c24944a4 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.229509816Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80694b96-9e38-4e10-8ac9-99519f049fe6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.230202432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245850230163562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80694b96-9e38-4e10-8ac9-99519f049fe6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.231063505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=992f5fe2-b2fb-49b4-866c-2d54f659cdfc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.231140832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=992f5fe2-b2fb-49b4-866c-2d54f659cdfc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.231191142Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=992f5fe2-b2fb-49b4-866c-2d54f659cdfc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.268600307Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff917cd0-812d-4897-aad5-7e6100235373 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.268683503Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff917cd0-812d-4897-aad5-7e6100235373 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.269623028Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d08f8d8-c6fe-428b-9e2b-9e058ee3b4e6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.270065786Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245850270036561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d08f8d8-c6fe-428b-9e2b-9e058ee3b4e6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.270721787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a8620c6-80a6-417c-896e-3d684e482299 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.270776953Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a8620c6-80a6-417c-896e-3d684e482299 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.270817224Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6a8620c6-80a6-417c-896e-3d684e482299 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.304687377Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d27f0d82-854f-4b3c-8896-bc391c7c5b60 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.304784868Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d27f0d82-854f-4b3c-8896-bc391c7c5b60 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.305896706Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be48e838-6a4a-409c-b9ff-4ea9cb1fd31b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.306467332Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245850306417769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be48e838-6a4a-409c-b9ff-4ea9cb1fd31b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.307172905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1ac7852-b9be-4d1f-8646-cc7f9e9d2abd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.307242064Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1ac7852-b9be-4d1f-8646-cc7f9e9d2abd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:50:50 old-k8s-version-998147 crio[650]: time="2024-07-17 19:50:50.307280108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a1ac7852-b9be-4d1f-8646-cc7f9e9d2abd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul17 19:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052125] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045822] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.749399] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.651884] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.750489] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.317708] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.064289] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056621] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.217924] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.129076] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.259232] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +6.636882] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.063978] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.692971] systemd-fstab-generator[970]: Ignoring "noauto" option for root device
	[ +13.037868] kauditd_printk_skb: 46 callbacks suppressed
	[Jul17 19:37] systemd-fstab-generator[5048]: Ignoring "noauto" option for root device
	[Jul17 19:39] systemd-fstab-generator[5324]: Ignoring "noauto" option for root device
	[  +0.060287] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:50:50 up 17 min,  0 users,  load average: 0.03, 0.03, 0.03
	Linux old-k8s-version-998147 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 17 19:50:49 old-k8s-version-998147 kubelet[6501]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc0002a5950, 0xc000c8a9f0, 0x23, 0xc0002d1440)
	Jul 17 19:50:49 old-k8s-version-998147 kubelet[6501]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Jul 17 19:50:49 old-k8s-version-998147 kubelet[6501]: created by internal/singleflight.(*Group).DoChan
	Jul 17 19:50:49 old-k8s-version-998147 kubelet[6501]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Jul 17 19:50:49 old-k8s-version-998147 kubelet[6501]: goroutine 164 [runnable]:
	Jul 17 19:50:49 old-k8s-version-998147 kubelet[6501]: net._C2func_getaddrinfo(0xc000c5a320, 0x0, 0xc000c9d260, 0xc0001224c8, 0x0, 0x0, 0x0)
	Jul 17 19:50:49 old-k8s-version-998147 kubelet[6501]:         _cgo_gotypes.go:94 +0x55
	Jul 17 19:50:49 old-k8s-version-998147 kubelet[6501]: net.cgoLookupIPCNAME.func1(0xc000c5a320, 0x20, 0x20, 0xc000c9d260, 0xc0001224c8, 0x4e4a5a0, 0xc000cb66a0, 0x57a492)
	Jul 17 19:50:49 old-k8s-version-998147 kubelet[6501]:         /usr/local/go/src/net/cgo_unix.go:161 +0xc5
	Jul 17 19:50:49 old-k8s-version-998147 kubelet[6501]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc000c8a9c0, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Jul 17 19:50:49 old-k8s-version-998147 kubelet[6501]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Jul 17 19:50:49 old-k8s-version-998147 kubelet[6501]: net.cgoIPLookup(0xc000e86f00, 0x48ab5d6, 0x3, 0xc000c8a9c0, 0x1f)
	Jul 17 19:50:49 old-k8s-version-998147 kubelet[6501]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Jul 17 19:50:49 old-k8s-version-998147 kubelet[6501]: created by net.cgoLookupIP
	Jul 17 19:50:49 old-k8s-version-998147 kubelet[6501]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Jul 17 19:50:49 old-k8s-version-998147 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 19:50:49 old-k8s-version-998147 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 19:50:50 old-k8s-version-998147 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 115.
	Jul 17 19:50:50 old-k8s-version-998147 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 19:50:50 old-k8s-version-998147 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 19:50:50 old-k8s-version-998147 kubelet[6582]: I0717 19:50:50.510671    6582 server.go:416] Version: v1.20.0
	Jul 17 19:50:50 old-k8s-version-998147 kubelet[6582]: I0717 19:50:50.511027    6582 server.go:837] Client rotation is on, will bootstrap in background
	Jul 17 19:50:50 old-k8s-version-998147 kubelet[6582]: I0717 19:50:50.512772    6582 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 19:50:50 old-k8s-version-998147 kubelet[6582]: W0717 19:50:50.513641    6582 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 17 19:50:50 old-k8s-version-998147 kubelet[6582]: I0717 19:50:50.514033    6582 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-998147 -n old-k8s-version-998147
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-998147 -n old-k8s-version-998147: exit status 2 (223.583547ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-998147" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.82s)
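The old-k8s-version failure above ends with minikube's K8S_KUBELET_NOT_RUNNING suggestion (the W0717 19:41:44 lines): inspect 'journalctl -xeu kubelet' and retry the start with the systemd cgroup driver. A minimal sketch of that retry against the same profile is shown below; the profile name, binary path, Kubernetes version, and container runtime are taken from this report, but the exact flag set of the original test invocation is not visible here, so treat the flags as an assumption rather than the test's actual command line.

	# Hypothetical retry of the failed profile with the kubelet cgroup driver pinned to
	# systemd, as suggested in the log above; other flags from the original run are omitted.
	out/minikube-linux-amd64 start -p old-k8s-version-998147 \
	  --kubernetes-version=v1.20.0 \
	  --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd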

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (382.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-713715 -n no-preload-713715
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-17 19:53:09.600921097 +0000 UTC m=+6645.917938752
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-713715 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-713715 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.514µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-713715 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
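The harness here waits up to 9m0s for pods labeled k8s-app=kubernetes-dashboard and then inspects the dashboard-metrics-scraper deployment for the overridden image (registry.k8s.io/echoserver:1.4). An equivalent manual check, illustrative only and not part of the recorded run, would be:

	kubectl --context no-preload-713715 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	kubectl --context no-preload-713715 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'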
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713715 -n no-preload-713715
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-713715 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-713715 logs -n 25: (1.352846167s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo find                             | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo crio                             | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-369638                                       | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	| delete  | -p                                                     | disable-driver-mounts-728347 | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | disable-driver-mounts-728347                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:25 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-637675            | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC | 17 Jul 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-713715             | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC | 17 Jul 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-713715                                   | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-378944  | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC | 17 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC |                     |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-998147        | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-637675                 | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-713715                  | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC | 17 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-713715 --memory=2200                     | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:37 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-378944       | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:38 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-998147             | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:52 UTC | 17 Jul 24 19:52 UTC |
	| start   | -p newest-cni-500710 --memory=2200 --alsologtostderr   | newest-cni-500710            | jenkins | v1.33.1 | 17 Jul 24 19:52 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:52:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:52:34.767774  465898 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:52:34.767999  465898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:52:34.768007  465898 out.go:304] Setting ErrFile to fd 2...
	I0717 19:52:34.768010  465898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:52:34.768198  465898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:52:34.768893  465898 out.go:298] Setting JSON to false
	I0717 19:52:34.770004  465898 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12898,"bootTime":1721233057,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:52:34.770072  465898 start.go:139] virtualization: kvm guest
	I0717 19:52:34.772405  465898 out.go:177] * [newest-cni-500710] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:52:34.773780  465898 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 19:52:34.773788  465898 notify.go:220] Checking for updates...
	I0717 19:52:34.776366  465898 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:52:34.777750  465898 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:52:34.779043  465898 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:52:34.780277  465898 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:52:34.781589  465898 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:52:34.783352  465898 config.go:182] Loaded profile config "default-k8s-diff-port-378944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:52:34.783466  465898 config.go:182] Loaded profile config "embed-certs-637675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:52:34.783580  465898 config.go:182] Loaded profile config "no-preload-713715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:52:34.783697  465898 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:52:34.821607  465898 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 19:52:34.822903  465898 start.go:297] selected driver: kvm2
	I0717 19:52:34.822927  465898 start.go:901] validating driver "kvm2" against <nil>
	I0717 19:52:34.822940  465898 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:52:34.823612  465898 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:52:34.823719  465898 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:52:34.839535  465898 install.go:137] /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 19:52:34.839582  465898 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0717 19:52:34.839615  465898 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0717 19:52:34.839861  465898 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 19:52:34.839923  465898 cni.go:84] Creating CNI manager for ""
	I0717 19:52:34.839942  465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:52:34.839959  465898 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 19:52:34.840050  465898 start.go:340] cluster config:
	{Name:newest-cni-500710 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-500710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:52:34.840161  465898 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:52:34.842355  465898 out.go:177] * Starting "newest-cni-500710" primary control-plane node in "newest-cni-500710" cluster
	I0717 19:52:34.843725  465898 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:52:34.843767  465898 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 19:52:34.843779  465898 cache.go:56] Caching tarball of preloaded images
	I0717 19:52:34.843902  465898 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:52:34.843933  465898 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0717 19:52:34.844059  465898 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/config.json ...
	I0717 19:52:34.844100  465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/config.json: {Name:mk20dfee504dbf17cdf63c89bd6f3d65ee6f5a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:52:34.844341  465898 start.go:360] acquireMachinesLock for newest-cni-500710: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:52:34.844376  465898 start.go:364] duration metric: took 19.479µs to acquireMachinesLock for "newest-cni-500710"
	I0717 19:52:34.844396  465898 start.go:93] Provisioning new machine with config: &{Name:newest-cni-500710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-500710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:52:34.844456  465898 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 19:52:34.846183  465898 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 19:52:34.846332  465898 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:52:34.846366  465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:52:34.861330  465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I0717 19:52:34.861788  465898 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:52:34.862363  465898 main.go:141] libmachine: Using API Version  1
	I0717 19:52:34.862389  465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:52:34.862698  465898 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:52:34.862930  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetMachineName
	I0717 19:52:34.863105  465898 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:52:34.863254  465898 start.go:159] libmachine.API.Create for "newest-cni-500710" (driver="kvm2")
	I0717 19:52:34.863282  465898 client.go:168] LocalClient.Create starting
	I0717 19:52:34.863321  465898 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem
	I0717 19:52:34.863361  465898 main.go:141] libmachine: Decoding PEM data...
	I0717 19:52:34.863383  465898 main.go:141] libmachine: Parsing certificate...
	I0717 19:52:34.863458  465898 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem
	I0717 19:52:34.863490  465898 main.go:141] libmachine: Decoding PEM data...
	I0717 19:52:34.863509  465898 main.go:141] libmachine: Parsing certificate...
	I0717 19:52:34.863532  465898 main.go:141] libmachine: Running pre-create checks...
	I0717 19:52:34.863547  465898 main.go:141] libmachine: (newest-cni-500710) Calling .PreCreateCheck
	I0717 19:52:34.863919  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetConfigRaw
	I0717 19:52:34.864367  465898 main.go:141] libmachine: Creating machine...
	I0717 19:52:34.864386  465898 main.go:141] libmachine: (newest-cni-500710) Calling .Create
	I0717 19:52:34.864526  465898 main.go:141] libmachine: (newest-cni-500710) Creating KVM machine...
	I0717 19:52:34.865881  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found existing default KVM network
	I0717 19:52:34.867212  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:34.867059  465921 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:23:3c:87} reservation:<nil>}
	I0717 19:52:34.868157  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:34.868074  465921 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fe:86:02} reservation:<nil>}
	I0717 19:52:34.868961  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:34.868900  465921 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:b5:5a:39} reservation:<nil>}
	I0717 19:52:34.870102  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:34.870025  465921 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a5760}
	I0717 19:52:34.870140  465898 main.go:141] libmachine: (newest-cni-500710) DBG | created network xml: 
	I0717 19:52:34.870155  465898 main.go:141] libmachine: (newest-cni-500710) DBG | <network>
	I0717 19:52:34.870166  465898 main.go:141] libmachine: (newest-cni-500710) DBG |   <name>mk-newest-cni-500710</name>
	I0717 19:52:34.870176  465898 main.go:141] libmachine: (newest-cni-500710) DBG |   <dns enable='no'/>
	I0717 19:52:34.870206  465898 main.go:141] libmachine: (newest-cni-500710) DBG |   
	I0717 19:52:34.870227  465898 main.go:141] libmachine: (newest-cni-500710) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0717 19:52:34.870246  465898 main.go:141] libmachine: (newest-cni-500710) DBG |     <dhcp>
	I0717 19:52:34.870258  465898 main.go:141] libmachine: (newest-cni-500710) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0717 19:52:34.870268  465898 main.go:141] libmachine: (newest-cni-500710) DBG |     </dhcp>
	I0717 19:52:34.870275  465898 main.go:141] libmachine: (newest-cni-500710) DBG |   </ip>
	I0717 19:52:34.870283  465898 main.go:141] libmachine: (newest-cni-500710) DBG |   
	I0717 19:52:34.870291  465898 main.go:141] libmachine: (newest-cni-500710) DBG | </network>
	I0717 19:52:34.870301  465898 main.go:141] libmachine: (newest-cni-500710) DBG | 
	I0717 19:52:34.875809  465898 main.go:141] libmachine: (newest-cni-500710) DBG | trying to create private KVM network mk-newest-cni-500710 192.168.72.0/24...
	I0717 19:52:34.949864  465898 main.go:141] libmachine: (newest-cni-500710) DBG | private KVM network mk-newest-cni-500710 192.168.72.0/24 created
	I0717 19:52:34.949914  465898 main.go:141] libmachine: (newest-cni-500710) Setting up store path in /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710 ...
	I0717 19:52:34.949940  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:34.949850  465921 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:52:34.949952  465898 main.go:141] libmachine: (newest-cni-500710) Building disk image from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 19:52:34.950054  465898 main.go:141] libmachine: (newest-cni-500710) Downloading /home/jenkins/minikube-integration/19282-392903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 19:52:35.243341  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:35.243154  465921 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa...
	I0717 19:52:35.501920  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:35.501762  465921 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/newest-cni-500710.rawdisk...
	I0717 19:52:35.501957  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Writing magic tar header
	I0717 19:52:35.501995  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Writing SSH key tar header
	I0717 19:52:35.502016  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:35.501914  465921 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710 ...
	I0717 19:52:35.502034  465898 main.go:141] libmachine: (newest-cni-500710) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710 (perms=drwx------)
	I0717 19:52:35.502056  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710
	I0717 19:52:35.502072  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines
	I0717 19:52:35.502081  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:52:35.502092  465898 main.go:141] libmachine: (newest-cni-500710) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines (perms=drwxr-xr-x)
	I0717 19:52:35.502107  465898 main.go:141] libmachine: (newest-cni-500710) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube (perms=drwxr-xr-x)
	I0717 19:52:35.502121  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903
	I0717 19:52:35.502151  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 19:52:35.502177  465898 main.go:141] libmachine: (newest-cni-500710) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903 (perms=drwxrwxr-x)
	I0717 19:52:35.502190  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Checking permissions on dir: /home/jenkins
	I0717 19:52:35.502202  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Checking permissions on dir: /home
	I0717 19:52:35.502213  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Skipping /home - not owner
	I0717 19:52:35.502232  465898 main.go:141] libmachine: (newest-cni-500710) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 19:52:35.502242  465898 main.go:141] libmachine: (newest-cni-500710) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 19:52:35.502249  465898 main.go:141] libmachine: (newest-cni-500710) Creating domain...
	I0717 19:52:35.503265  465898 main.go:141] libmachine: (newest-cni-500710) define libvirt domain using xml: 
	I0717 19:52:35.503287  465898 main.go:141] libmachine: (newest-cni-500710) <domain type='kvm'>
	I0717 19:52:35.503295  465898 main.go:141] libmachine: (newest-cni-500710)   <name>newest-cni-500710</name>
	I0717 19:52:35.503300  465898 main.go:141] libmachine: (newest-cni-500710)   <memory unit='MiB'>2200</memory>
	I0717 19:52:35.503305  465898 main.go:141] libmachine: (newest-cni-500710)   <vcpu>2</vcpu>
	I0717 19:52:35.503310  465898 main.go:141] libmachine: (newest-cni-500710)   <features>
	I0717 19:52:35.503316  465898 main.go:141] libmachine: (newest-cni-500710)     <acpi/>
	I0717 19:52:35.503323  465898 main.go:141] libmachine: (newest-cni-500710)     <apic/>
	I0717 19:52:35.503331  465898 main.go:141] libmachine: (newest-cni-500710)     <pae/>
	I0717 19:52:35.503341  465898 main.go:141] libmachine: (newest-cni-500710)     
	I0717 19:52:35.503350  465898 main.go:141] libmachine: (newest-cni-500710)   </features>
	I0717 19:52:35.503360  465898 main.go:141] libmachine: (newest-cni-500710)   <cpu mode='host-passthrough'>
	I0717 19:52:35.503371  465898 main.go:141] libmachine: (newest-cni-500710)   
	I0717 19:52:35.503380  465898 main.go:141] libmachine: (newest-cni-500710)   </cpu>
	I0717 19:52:35.503416  465898 main.go:141] libmachine: (newest-cni-500710)   <os>
	I0717 19:52:35.503439  465898 main.go:141] libmachine: (newest-cni-500710)     <type>hvm</type>
	I0717 19:52:35.503449  465898 main.go:141] libmachine: (newest-cni-500710)     <boot dev='cdrom'/>
	I0717 19:52:35.503459  465898 main.go:141] libmachine: (newest-cni-500710)     <boot dev='hd'/>
	I0717 19:52:35.503473  465898 main.go:141] libmachine: (newest-cni-500710)     <bootmenu enable='no'/>
	I0717 19:52:35.503483  465898 main.go:141] libmachine: (newest-cni-500710)   </os>
	I0717 19:52:35.503491  465898 main.go:141] libmachine: (newest-cni-500710)   <devices>
	I0717 19:52:35.503536  465898 main.go:141] libmachine: (newest-cni-500710)     <disk type='file' device='cdrom'>
	I0717 19:52:35.503560  465898 main.go:141] libmachine: (newest-cni-500710)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/boot2docker.iso'/>
	I0717 19:52:35.503574  465898 main.go:141] libmachine: (newest-cni-500710)       <target dev='hdc' bus='scsi'/>
	I0717 19:52:35.503611  465898 main.go:141] libmachine: (newest-cni-500710)       <readonly/>
	I0717 19:52:35.503623  465898 main.go:141] libmachine: (newest-cni-500710)     </disk>
	I0717 19:52:35.503635  465898 main.go:141] libmachine: (newest-cni-500710)     <disk type='file' device='disk'>
	I0717 19:52:35.503647  465898 main.go:141] libmachine: (newest-cni-500710)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 19:52:35.503663  465898 main.go:141] libmachine: (newest-cni-500710)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/newest-cni-500710.rawdisk'/>
	I0717 19:52:35.503676  465898 main.go:141] libmachine: (newest-cni-500710)       <target dev='hda' bus='virtio'/>
	I0717 19:52:35.503686  465898 main.go:141] libmachine: (newest-cni-500710)     </disk>
	I0717 19:52:35.503695  465898 main.go:141] libmachine: (newest-cni-500710)     <interface type='network'>
	I0717 19:52:35.503710  465898 main.go:141] libmachine: (newest-cni-500710)       <source network='mk-newest-cni-500710'/>
	I0717 19:52:35.503721  465898 main.go:141] libmachine: (newest-cni-500710)       <model type='virtio'/>
	I0717 19:52:35.503732  465898 main.go:141] libmachine: (newest-cni-500710)     </interface>
	I0717 19:52:35.503744  465898 main.go:141] libmachine: (newest-cni-500710)     <interface type='network'>
	I0717 19:52:35.503756  465898 main.go:141] libmachine: (newest-cni-500710)       <source network='default'/>
	I0717 19:52:35.503786  465898 main.go:141] libmachine: (newest-cni-500710)       <model type='virtio'/>
	I0717 19:52:35.503809  465898 main.go:141] libmachine: (newest-cni-500710)     </interface>
	I0717 19:52:35.503820  465898 main.go:141] libmachine: (newest-cni-500710)     <serial type='pty'>
	I0717 19:52:35.503829  465898 main.go:141] libmachine: (newest-cni-500710)       <target port='0'/>
	I0717 19:52:35.503834  465898 main.go:141] libmachine: (newest-cni-500710)     </serial>
	I0717 19:52:35.503843  465898 main.go:141] libmachine: (newest-cni-500710)     <console type='pty'>
	I0717 19:52:35.503853  465898 main.go:141] libmachine: (newest-cni-500710)       <target type='serial' port='0'/>
	I0717 19:52:35.503863  465898 main.go:141] libmachine: (newest-cni-500710)     </console>
	I0717 19:52:35.503875  465898 main.go:141] libmachine: (newest-cni-500710)     <rng model='virtio'>
	I0717 19:52:35.503885  465898 main.go:141] libmachine: (newest-cni-500710)       <backend model='random'>/dev/random</backend>
	I0717 19:52:35.503905  465898 main.go:141] libmachine: (newest-cni-500710)     </rng>
	I0717 19:52:35.503922  465898 main.go:141] libmachine: (newest-cni-500710)     
	I0717 19:52:35.503934  465898 main.go:141] libmachine: (newest-cni-500710)     
	I0717 19:52:35.503939  465898 main.go:141] libmachine: (newest-cni-500710)   </devices>
	I0717 19:52:35.503945  465898 main.go:141] libmachine: (newest-cni-500710) </domain>
	I0717 19:52:35.503952  465898 main.go:141] libmachine: (newest-cni-500710) 
	I0717 19:52:35.508989  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:49:c2:ce in network default
	I0717 19:52:35.509758  465898 main.go:141] libmachine: (newest-cni-500710) Ensuring networks are active...
	I0717 19:52:35.509784  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:35.510660  465898 main.go:141] libmachine: (newest-cni-500710) Ensuring network default is active
	I0717 19:52:35.510963  465898 main.go:141] libmachine: (newest-cni-500710) Ensuring network mk-newest-cni-500710 is active
	I0717 19:52:35.511522  465898 main.go:141] libmachine: (newest-cni-500710) Getting domain xml...
	I0717 19:52:35.512137  465898 main.go:141] libmachine: (newest-cni-500710) Creating domain...
	I0717 19:52:36.777503  465898 main.go:141] libmachine: (newest-cni-500710) Waiting to get IP...
	I0717 19:52:36.778265  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:36.778652  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:36.778696  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:36.778626  465921 retry.go:31] will retry after 214.377066ms: waiting for machine to come up
	I0717 19:52:36.995120  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:36.995606  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:36.995659  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:36.995566  465921 retry.go:31] will retry after 343.353816ms: waiting for machine to come up
	I0717 19:52:37.340150  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:37.340665  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:37.340699  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:37.340606  465921 retry.go:31] will retry after 375.581243ms: waiting for machine to come up
	I0717 19:52:37.717883  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:37.718421  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:37.718452  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:37.718362  465921 retry.go:31] will retry after 549.702915ms: waiting for machine to come up
	I0717 19:52:38.270051  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:38.270510  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:38.270539  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:38.270466  465921 retry.go:31] will retry after 696.630007ms: waiting for machine to come up
	I0717 19:52:38.968153  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:38.968606  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:38.968670  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:38.968553  465921 retry.go:31] will retry after 729.435483ms: waiting for machine to come up
	I0717 19:52:39.699220  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:39.699796  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:39.699827  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:39.699743  465921 retry.go:31] will retry after 1.069404688s: waiting for machine to come up
	I0717 19:52:40.770329  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:40.770733  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:40.770776  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:40.770684  465921 retry.go:31] will retry after 1.324069044s: waiting for machine to come up
	I0717 19:52:42.097255  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:42.097697  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:42.097730  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:42.097643  465921 retry.go:31] will retry after 1.572231128s: waiting for machine to come up
	I0717 19:52:43.671924  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:43.672506  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:43.672563  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:43.672438  465921 retry.go:31] will retry after 2.283478143s: waiting for machine to come up
	I0717 19:52:45.957637  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:45.958153  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:45.958175  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:45.958081  465921 retry.go:31] will retry after 2.813092288s: waiting for machine to come up
	I0717 19:52:48.775078  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:48.775586  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:48.775613  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:48.775531  465921 retry.go:31] will retry after 2.367550426s: waiting for machine to come up
	I0717 19:52:51.144282  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:51.144844  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:51.144877  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:51.144765  465921 retry.go:31] will retry after 3.518690572s: waiting for machine to come up
	I0717 19:52:54.666084  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:54.666573  465898 main.go:141] libmachine: (newest-cni-500710) Found IP for machine: 192.168.72.104
	I0717 19:52:54.666624  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has current primary IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:54.666631  465898 main.go:141] libmachine: (newest-cni-500710) Reserving static IP address...
	I0717 19:52:54.666909  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find host DHCP lease matching {name: "newest-cni-500710", mac: "52:54:00:9b:88:f9", ip: "192.168.72.104"} in network mk-newest-cni-500710
	I0717 19:52:54.745521  465898 main.go:141] libmachine: (newest-cni-500710) Reserved static IP address: 192.168.72.104
	I0717 19:52:54.745559  465898 main.go:141] libmachine: (newest-cni-500710) Waiting for SSH to be available...
	I0717 19:52:54.745569  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Getting to WaitForSSH function...
	I0717 19:52:54.748420  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:54.748703  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710
	I0717 19:52:54.748741  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find defined IP address of network mk-newest-cni-500710 interface with MAC address 52:54:00:9b:88:f9
	I0717 19:52:54.748890  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Using SSH client type: external
	I0717 19:52:54.748916  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa (-rw-------)
	I0717 19:52:54.748965  465898 main.go:141] libmachine: (newest-cni-500710) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:52:54.748980  465898 main.go:141] libmachine: (newest-cni-500710) DBG | About to run SSH command:
	I0717 19:52:54.749019  465898 main.go:141] libmachine: (newest-cni-500710) DBG | exit 0
	I0717 19:52:54.753184  465898 main.go:141] libmachine: (newest-cni-500710) DBG | SSH cmd err, output: exit status 255: 
	I0717 19:52:54.753208  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 19:52:54.753218  465898 main.go:141] libmachine: (newest-cni-500710) DBG | command : exit 0
	I0717 19:52:54.753229  465898 main.go:141] libmachine: (newest-cni-500710) DBG | err     : exit status 255
	I0717 19:52:54.753239  465898 main.go:141] libmachine: (newest-cni-500710) DBG | output  : 
	I0717 19:52:57.756036  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Getting to WaitForSSH function...
	I0717 19:52:57.758616  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:57.759012  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:57.759046  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:57.759191  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Using SSH client type: external
	I0717 19:52:57.759219  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa (-rw-------)
	I0717 19:52:57.759267  465898 main.go:141] libmachine: (newest-cni-500710) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:52:57.759286  465898 main.go:141] libmachine: (newest-cni-500710) DBG | About to run SSH command:
	I0717 19:52:57.759299  465898 main.go:141] libmachine: (newest-cni-500710) DBG | exit 0
	I0717 19:52:57.884866  465898 main.go:141] libmachine: (newest-cni-500710) DBG | SSH cmd err, output: <nil>: 
	I0717 19:52:57.885287  465898 main.go:141] libmachine: (newest-cni-500710) KVM machine creation complete!
	I0717 19:52:57.885598  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetConfigRaw
	I0717 19:52:57.886228  465898 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:52:57.886450  465898 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:52:57.886644  465898 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 19:52:57.886660  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetState
	I0717 19:52:57.888162  465898 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 19:52:57.888180  465898 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 19:52:57.888187  465898 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 19:52:57.888192  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:57.890403  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:57.890747  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:57.890776  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:57.890901  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:57.891056  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:57.891265  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:57.891440  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:57.891622  465898 main.go:141] libmachine: Using SSH client type: native
	I0717 19:52:57.891829  465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:52:57.891842  465898 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 19:52:57.991867  465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
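The provisioning steps above and below all run shell commands over SSH with libmachine's "native" client (`Using SSH client type: native`). As a rough, hedged illustration of that pattern only — not libmachine's actual code — a minimal Go sketch using golang.org/x/crypto/ssh that runs `exit 0` against the VM the way the WaitForSSH step does:

// Illustrative sketch of running a provisioning command over SSH, as the
// log's WaitForSSH / hostname / crio-config steps do. Not libmachine's code.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.72.104:22", "docker",
		"/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa",
		"exit 0")
	fmt.Println(out, err)
}
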
	I0717 19:52:57.991887  465898 main.go:141] libmachine: Detecting the provisioner...
	I0717 19:52:57.991895  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:57.994569  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:57.994918  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:57.994949  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:57.995096  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:57.995296  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:57.995481  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:57.995626  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:57.995840  465898 main.go:141] libmachine: Using SSH client type: native
	I0717 19:52:57.996033  465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:52:57.996046  465898 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 19:52:58.097422  465898 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 19:52:58.097505  465898 main.go:141] libmachine: found compatible host: buildroot
	I0717 19:52:58.097516  465898 main.go:141] libmachine: Provisioning with buildroot...
	I0717 19:52:58.097538  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetMachineName
	I0717 19:52:58.097810  465898 buildroot.go:166] provisioning hostname "newest-cni-500710"
	I0717 19:52:58.097837  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetMachineName
	I0717 19:52:58.098041  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:58.100592  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.100950  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:58.100968  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.101147  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:58.101343  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:58.101484  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:58.101627  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:58.101793  465898 main.go:141] libmachine: Using SSH client type: native
	I0717 19:52:58.102037  465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:52:58.102052  465898 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-500710 && echo "newest-cni-500710" | sudo tee /etc/hostname
	I0717 19:52:58.225128  465898 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-500710
	
	I0717 19:52:58.225183  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:58.228010  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.228335  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:58.228364  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.228575  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:58.228785  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:58.228958  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:58.229111  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:58.229298  465898 main.go:141] libmachine: Using SSH client type: native
	I0717 19:52:58.229520  465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:52:58.229545  465898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-500710' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-500710/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-500710' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:52:58.342896  465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:52:58.342938  465898 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:52:58.342987  465898 buildroot.go:174] setting up certificates
	I0717 19:52:58.343005  465898 provision.go:84] configureAuth start
	I0717 19:52:58.343022  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetMachineName
	I0717 19:52:58.343341  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetIP
	I0717 19:52:58.346276  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.346751  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:58.346784  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.346889  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:58.349044  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.349386  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:58.349411  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.349642  465898 provision.go:143] copyHostCerts
	I0717 19:52:58.349722  465898 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:52:58.349749  465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:52:58.349836  465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:52:58.349968  465898 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:52:58.349978  465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:52:58.350018  465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:52:58.350115  465898 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:52:58.350125  465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:52:58.350158  465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:52:58.350238  465898 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.newest-cni-500710 san=[127.0.0.1 192.168.72.104 localhost minikube newest-cni-500710]
	I0717 19:52:58.503609  465898 provision.go:177] copyRemoteCerts
	I0717 19:52:58.503684  465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:52:58.503717  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:58.506281  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.506750  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:58.506778  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.507009  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:58.507231  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:58.507395  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:58.507575  465898 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa Username:docker}
	I0717 19:52:58.587267  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:52:58.612001  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:52:58.635418  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 19:52:58.660399  465898 provision.go:87] duration metric: took 317.376332ms to configureAuth
	I0717 19:52:58.660432  465898 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:52:58.660689  465898 config.go:182] Loaded profile config "newest-cni-500710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:52:58.660767  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:58.663622  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.663912  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:58.663935  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.664128  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:58.664340  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:58.664520  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:58.664669  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:58.664911  465898 main.go:141] libmachine: Using SSH client type: native
	I0717 19:52:58.665111  465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:52:58.665132  465898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:52:58.926130  465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:52:58.926163  465898 main.go:141] libmachine: Checking connection to Docker...
	I0717 19:52:58.926175  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetURL
	I0717 19:52:58.927502  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Using libvirt version 6000000
	I0717 19:52:58.929908  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.930294  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:58.930325  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.930510  465898 main.go:141] libmachine: Docker is up and running!
	I0717 19:52:58.930524  465898 main.go:141] libmachine: Reticulating splines...
	I0717 19:52:58.930531  465898 client.go:171] duration metric: took 24.067239354s to LocalClient.Create
	I0717 19:52:58.930555  465898 start.go:167] duration metric: took 24.067302202s to libmachine.API.Create "newest-cni-500710"
	I0717 19:52:58.930569  465898 start.go:293] postStartSetup for "newest-cni-500710" (driver="kvm2")
	I0717 19:52:58.930585  465898 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:52:58.930603  465898 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:52:58.930857  465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:52:58.930888  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:58.932791  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.933115  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:58.933144  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.933261  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:58.933455  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:58.933596  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:58.933741  465898 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa Username:docker}
	I0717 19:52:59.017055  465898 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:52:59.022210  465898 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:52:59.022243  465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:52:59.022315  465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:52:59.022390  465898 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:52:59.022536  465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:52:59.033029  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:52:59.056620  465898 start.go:296] duration metric: took 126.019682ms for postStartSetup
	I0717 19:52:59.056673  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetConfigRaw
	I0717 19:52:59.057273  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetIP
	I0717 19:52:59.059994  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.060342  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:59.060373  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.060656  465898 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/config.json ...
	I0717 19:52:59.060822  465898 start.go:128] duration metric: took 24.216348379s to createHost
	I0717 19:52:59.060845  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:59.063393  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.063716  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:59.063754  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.063877  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:59.064084  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:59.064258  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:59.064419  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:59.064619  465898 main.go:141] libmachine: Using SSH client type: native
	I0717 19:52:59.064813  465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:52:59.064826  465898 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:52:59.165476  465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721245979.141065718
	
	I0717 19:52:59.165498  465898 fix.go:216] guest clock: 1721245979.141065718
	I0717 19:52:59.165506  465898 fix.go:229] Guest: 2024-07-17 19:52:59.141065718 +0000 UTC Remote: 2024-07-17 19:52:59.060832447 +0000 UTC m=+24.330750472 (delta=80.233271ms)
	I0717 19:52:59.165539  465898 fix.go:200] guest clock delta is within tolerance: 80.233271ms
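The fix.go lines above read the guest clock (via `date +%s.%N`, which the log renders with `%!s(MISSING)` verb artifacts) and compare it against the host clock. A minimal sketch of that arithmetic, assuming only the Go standard library:

// Minimal sketch (not minikube's fix.go): parse the guest's `date +%s.%N`
// output and compute the delta against a host reference time. With the
// values logged above this prints ~80.233271ms, the reported delta.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseEpochNanos(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad the fraction to 9 digits so short fractions parse as nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	guest, _ := parseEpochNanos("1721245979.141065718")
	host := time.Date(2024, 7, 17, 19, 52, 59, 60832447, time.UTC)
	fmt.Println(guest.Sub(host)) // 80.233271ms, within the logged tolerance
}
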
	I0717 19:52:59.165544  465898 start.go:83] releasing machines lock for "newest-cni-500710", held for 24.32115845s
	I0717 19:52:59.165562  465898 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:52:59.165824  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetIP
	I0717 19:52:59.168636  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.169031  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:59.169060  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.169185  465898 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:52:59.169779  465898 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:52:59.169974  465898 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:52:59.170098  465898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:52:59.170143  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:59.170197  465898 ssh_runner.go:195] Run: cat /version.json
	I0717 19:52:59.170219  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:59.173096  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.173234  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.173500  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:59.173527  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.173733  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:59.173834  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:59.173868  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.173897  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:59.174022  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:59.174185  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:59.174211  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:59.174348  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:59.174388  465898 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa Username:docker}
	I0717 19:52:59.174458  465898 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa Username:docker}
	I0717 19:52:59.249657  465898 ssh_runner.go:195] Run: systemctl --version
	I0717 19:52:59.277869  465898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:52:59.441843  465898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:52:59.448001  465898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:52:59.448078  465898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:52:59.468227  465898 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:52:59.468265  465898 start.go:495] detecting cgroup driver to use...
	I0717 19:52:59.468347  465898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:52:59.491442  465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:52:59.506251  465898 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:52:59.506348  465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:52:59.519939  465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:52:59.533404  465898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:52:59.654673  465898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:52:59.806971  465898 docker.go:233] disabling docker service ...
	I0717 19:52:59.807068  465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:52:59.821705  465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:52:59.835046  465898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:52:59.982140  465898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:53:00.110908  465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:53:00.126060  465898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:53:00.145395  465898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 19:53:00.145472  465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:00.157222  465898 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:53:00.157298  465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:00.167978  465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:00.179059  465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:00.190133  465898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:53:00.201263  465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:00.212434  465898 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:00.230560  465898 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:00.241400  465898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:53:00.250916  465898 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:53:00.250963  465898 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:53:00.263667  465898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:53:00.273256  465898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:53:00.392220  465898 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:53:00.554438  465898 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:53:00.554529  465898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:53:00.560098  465898 start.go:563] Will wait 60s for crictl version
	I0717 19:53:00.560155  465898 ssh_runner.go:195] Run: which crictl
	I0717 19:53:00.564406  465898 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:53:00.603169  465898 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:53:00.603264  465898 ssh_runner.go:195] Run: crio --version
	I0717 19:53:00.634731  465898 ssh_runner.go:195] Run: crio --version
	I0717 19:53:00.666668  465898 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 19:53:00.667952  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetIP
	I0717 19:53:00.670693  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:00.671029  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:00.671050  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:00.671291  465898 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 19:53:00.675824  465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:53:00.690521  465898 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0717 19:53:00.691904  465898 kubeadm.go:883] updating cluster {Name:newest-cni-500710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:newest-cni-500710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:53:00.692059  465898 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:53:00.692134  465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:53:00.726968  465898 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 19:53:00.727059  465898 ssh_runner.go:195] Run: which lz4
	I0717 19:53:00.731300  465898 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:53:00.735768  465898 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:53:00.735812  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0717 19:53:02.163077  465898 crio.go:462] duration metric: took 1.431804194s to copy over tarball
	I0717 19:53:02.163158  465898 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:53:04.258718  465898 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.095531457s)
	I0717 19:53:04.258753  465898 crio.go:469] duration metric: took 2.095647704s to extract the tarball
	I0717 19:53:04.258760  465898 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:53:04.297137  465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:53:04.347484  465898 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:53:04.347515  465898 cache_images.go:84] Images are preloaded, skipping loading
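The crio.go lines above decide whether the preload tarball must be copied over by listing images with `sudo crictl images --output json` and checking for the expected tags. A hedged sketch of that check; the JSON field names (`images`, `repoTags`) mirror the CRI image listing and are assumptions here, not a documented minikube contract:

// Hedged sketch of a preload check: parse crictl's JSON image list and
// look for one expected tag. Field names are assumptions (see lead-in).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.0-beta.0")
	fmt.Println(ok, err)
}
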
	I0717 19:53:04.347527  465898 kubeadm.go:934] updating node { 192.168.72.104 8443 v1.31.0-beta.0 crio true true} ...
	I0717 19:53:04.347703  465898 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-500710 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-500710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:53:04.347840  465898 ssh_runner.go:195] Run: crio config
	I0717 19:53:04.405395  465898 cni.go:84] Creating CNI manager for ""
	I0717 19:53:04.405416  465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:53:04.405426  465898 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0717 19:53:04.405456  465898 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.104 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-500710 NodeName:newest-cni-500710 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] Feature
Args:map[] NodeIP:192.168.72.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:53:04.405637  465898 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-500710"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:53:04.405717  465898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 19:53:04.416292  465898 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:53:04.416382  465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:53:04.427433  465898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0717 19:53:04.445729  465898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 19:53:04.463077  465898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
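The kubeadm/kubelet/kube-proxy configuration dumped at kubeadm.go:187 above is rendered from the option set logged at kubeadm.go:181 and written to /var/tmp/minikube/kubeadm.yaml.new. As a purely illustrative sketch — not minikube's actual template or option structs — a text/template rendering of a small fragment of that config:

// Illustrative only: render a kubeadm config fragment from a few of the
// options shown in the log above. Struct and template here are hypothetical.
package main

import (
	"os"
	"text/template"
)

type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	PodSubnet        string
	ServiceCIDR      string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.72.104",
		APIServerPort:    8443,
		PodSubnet:        "10.42.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
		K8sVersion:       "v1.31.0-beta.0",
	})
}
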
	I0717 19:53:04.480892  465898 ssh_runner.go:195] Run: grep 192.168.72.104	control-plane.minikube.internal$ /etc/hosts
	I0717 19:53:04.484946  465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:53:04.498690  465898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:53:04.638586  465898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:53:04.656982  465898 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710 for IP: 192.168.72.104
	I0717 19:53:04.657011  465898 certs.go:194] generating shared ca certs ...
	I0717 19:53:04.657038  465898 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:53:04.657256  465898 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:53:04.657320  465898 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:53:04.657334  465898 certs.go:256] generating profile certs ...
	I0717 19:53:04.657410  465898 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/client.key
	I0717 19:53:04.657441  465898 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/client.crt with IP's: []
	I0717 19:53:04.802854  465898 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/client.crt ...
	I0717 19:53:04.802892  465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/client.crt: {Name:mkbdc92807370e837be9fde73dc8b8e0802b90f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:53:04.803105  465898 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/client.key ...
	I0717 19:53:04.803122  465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/client.key: {Name:mk116d2e8b66bda777a94ad74c0c061d9e613c9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:53:04.803243  465898 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.key.c59b9261
	I0717 19:53:04.803267  465898 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.crt.c59b9261 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.104]
	I0717 19:53:04.894397  465898 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.crt.c59b9261 ...
	I0717 19:53:04.894427  465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.crt.c59b9261: {Name:mk7dcae462b907b4660fd05a42e1f25ce611240d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:53:04.894589  465898 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.key.c59b9261 ...
	I0717 19:53:04.894602  465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.key.c59b9261: {Name:mk3d09d7d7cabc56058b70a05ca2d9fbeaa09b21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:53:04.894680  465898 certs.go:381] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.crt.c59b9261 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.crt
	I0717 19:53:04.894789  465898 certs.go:385] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.key.c59b9261 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.key
	I0717 19:53:04.894855  465898 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.key
	I0717 19:53:04.894873  465898 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.crt with IP's: []
	I0717 19:53:05.040563  465898 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.crt ...
	I0717 19:53:05.040607  465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.crt: {Name:mkd4b4e27352e6c439affe243713209d4973dc35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:53:05.040847  465898 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.key ...
	I0717 19:53:05.040872  465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.key: {Name:mk4fdf3be97ac5b32e32636ab03155d9176e2950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
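The certs.go/crypto.go lines above generate CA-signed profile certificates, including an apiserver cert whose SANs are 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.72.104. A minimal, hedged sketch of signing a certificate with IP SANs using crypto/x509 (illustrative only, error handling elided; this is not minikube's implementation, which loads ca.crt/ca.key from the .minikube directory):

// Illustrative sketch: create a throwaway CA, then sign a server cert with
// IP SANs like the apiserver cert generated in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical in-memory CA (minikube reuses its existing minikubeCA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "newest-cni-500710"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.104"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
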
	I0717 19:53:05.041078  465898 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:53:05.041115  465898 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:53:05.041122  465898 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:53:05.041153  465898 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:53:05.041174  465898 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:53:05.041195  465898 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:53:05.041229  465898 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:53:05.041996  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:53:05.069631  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:53:05.095458  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:53:05.119956  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:53:05.144412  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 19:53:05.168926  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:53:05.194999  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:53:05.219901  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:53:05.246572  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:53:05.273077  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:53:05.298253  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:53:05.323163  465898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:53:05.340280  465898 ssh_runner.go:195] Run: openssl version
	I0717 19:53:05.346165  465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:53:05.356700  465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:53:05.361276  465898 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:53:05.361321  465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:53:05.367234  465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:53:05.378121  465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:53:05.389168  465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:53:05.394077  465898 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:53:05.394142  465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:53:05.399947  465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:53:05.411101  465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:53:05.422899  465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:53:05.428001  465898 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:53:05.428069  465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:53:05.434134  465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:53:05.448695  465898 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:53:05.453265  465898 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 19:53:05.453330  465898 kubeadm.go:392] StartCluster: {Name:newest-cni-500710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:newest-cni-500710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:53:05.453418  465898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:53:05.453481  465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:53:05.521089  465898 cri.go:89] found id: ""
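(For reference: the CRI check above is how the start path detects whether any kube-system containers already exist on the node before deciding between a fresh init and a restore. A minimal equivalent of the manual check, mirroring the crictl invocation shown in the log:

    # List all kube-system containers (running or exited), IDs only.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # An empty result (found id: "" above) means no existing cluster containers,
    # so the flow proceeds toward a clean `kubeadm init`.
)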
	I0717 19:53:05.521174  465898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:53:05.533298  465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:53:05.545777  465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:53:05.558012  465898 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:53:05.558044  465898 kubeadm.go:157] found existing configuration files:
	
	I0717 19:53:05.558103  465898 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:53:05.568744  465898 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:53:05.568824  465898 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:53:05.580464  465898 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:53:05.590280  465898 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:53:05.590344  465898 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:53:05.601812  465898 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:53:05.611595  465898 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:53:05.611655  465898 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:53:05.623287  465898 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:53:05.633221  465898 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:53:05.633288  465898 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
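(For reference: the repeated grep/rm pairs above implement the stale-config cleanup — each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise removed so `kubeadm init` regenerates it. A condensed sketch; the endpoint and file names are taken from the log, the loop form is illustrative:

    # Drop any kubeconfig that does not reference the expected API endpoint.
    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
)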
	I0717 19:53:05.644991  465898 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:53:05.758644  465898 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 19:53:05.758757  465898 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:53:05.881869  465898 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:53:05.882091  465898 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:53:05.882257  465898 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 19:53:05.900635  465898 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:53:06.185968  465898 out.go:204]   - Generating certificates and keys ...
	I0717 19:53:06.186135  465898 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:53:06.186216  465898 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:53:06.186298  465898 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 19:53:06.232108  465898 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 19:53:06.503680  465898 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 19:53:06.611201  465898 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 19:53:06.770058  465898 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 19:53:06.770203  465898 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-500710] and IPs [192.168.72.104 127.0.0.1 ::1]
	I0717 19:53:07.048582  465898 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 19:53:07.048824  465898 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-500710] and IPs [192.168.72.104 127.0.0.1 ::1]
	I0717 19:53:07.265137  465898 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 19:53:07.474647  465898 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 19:53:07.785708  465898 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 19:53:07.785798  465898 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:53:07.922554  465898 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:53:08.134410  465898 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:53:08.298182  465898 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:53:08.486413  465898 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:53:08.663674  465898 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:53:08.664238  465898 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:53:08.667348  465898 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:53:08.669554  465898 out.go:204]   - Booting up control plane ...
	I0717 19:53:08.669724  465898 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:53:08.669836  465898 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:53:08.670927  465898 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:53:08.689791  465898 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:53:08.696429  465898 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:53:08.696502  465898 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:53:08.845868  465898 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:53:08.846020  465898 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:53:09.345273  465898 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.562429ms
	I0717 19:53:09.345351  465898 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	
	
	==> CRI-O <==
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.276502623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245990276275236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14936ef2-a28e-4449-9bf4-003b68fa2371 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.277363895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c86b0d4-7e4b-4295-a088-b62977708ef0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.277432211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c86b0d4-7e4b-4295-a088-b62977708ef0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.277754813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c,PodSandboxId:7bea569d68669bce5032544241dd0ffd6fba7887bb2ee96886cc8f58ae38b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721244827911062994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785118d7-5d47-42fb-a3be-a13f7a837b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e205ae72a0b56cc35866955025cb089dee7c1709703b44d301f533a070699c96,PodSandboxId:68f1705638706c37ab2f51ff381dfcf98532f7d2d191a82d8862873af0f05610,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721244808288019270,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75d9f921-4990-4f7c-99d5-f2976d35cd5d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002,PodSandboxId:e0d7cb5205bf86d52581a3613db71edac9d0c7ef38e7d2d3b938120afcf97cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721244805202880065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hk8t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77,PodSandboxId:019ac1b79365ae4ac94c855a8c330ecc72a2bfed5a5ebc1baa4e06ea33f693a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721244797137140201,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x85f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaaf7ad0-8b1f-483c-97
7b-71ca6f2808c4,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe,PodSandboxId:7bea569d68669bce5032544241dd0ffd6fba7887bb2ee96886cc8f58ae38b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721244797230716556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785118d7-5d47-42fb-a3be-a13f7a837b
2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5,PodSandboxId:9a227e350da7ee752414b807ad484d43a843f3b32876f2b25676401bb0e3fb72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721244792441994781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f47ac0a43f0e1d61
3a6c5abca3b9fb6c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df,PodSandboxId:d2fb4c975840ed4de46f2e3aa48c65553a74f30d31002f1919a5b46ec691b5f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721244792410513739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14aefd201618a5b2395b71f20510c
fb7,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0,PodSandboxId:cd850abcfceb77e39ac1f6bd317bda2d4b106fad5dbdc756a9e6b1fe7bc475f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721244792345138290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077876edb82a9270e4e34baa8fae306c,},Annotations:map[string]string{io.kube
rnetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5,PodSandboxId:b48faef64e337c26eeb2ab8fa47848edd5a2481a7632580820da1c0e45761d39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721244792305091129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795262caeee8afbec1e31fd0b6f3a9e1,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c86b0d4-7e4b-4295-a088-b62977708ef0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.328893288Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=43360069-6be9-407b-a480-7f55e7f13ce7 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.328997231Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=43360069-6be9-407b-a480-7f55e7f13ce7 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.330511473Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd336567-a59d-4a1d-872b-8407d21a2bd6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.331024610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245990330984866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd336567-a59d-4a1d-872b-8407d21a2bd6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.331751992Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=538aa690-f338-444c-be48-bb99b2010d21 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.331804975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=538aa690-f338-444c-be48-bb99b2010d21 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.332067506Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c,PodSandboxId:7bea569d68669bce5032544241dd0ffd6fba7887bb2ee96886cc8f58ae38b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721244827911062994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785118d7-5d47-42fb-a3be-a13f7a837b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e205ae72a0b56cc35866955025cb089dee7c1709703b44d301f533a070699c96,PodSandboxId:68f1705638706c37ab2f51ff381dfcf98532f7d2d191a82d8862873af0f05610,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721244808288019270,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75d9f921-4990-4f7c-99d5-f2976d35cd5d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002,PodSandboxId:e0d7cb5205bf86d52581a3613db71edac9d0c7ef38e7d2d3b938120afcf97cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721244805202880065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hk8t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77,PodSandboxId:019ac1b79365ae4ac94c855a8c330ecc72a2bfed5a5ebc1baa4e06ea33f693a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721244797137140201,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x85f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaaf7ad0-8b1f-483c-97
7b-71ca6f2808c4,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe,PodSandboxId:7bea569d68669bce5032544241dd0ffd6fba7887bb2ee96886cc8f58ae38b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721244797230716556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785118d7-5d47-42fb-a3be-a13f7a837b
2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5,PodSandboxId:9a227e350da7ee752414b807ad484d43a843f3b32876f2b25676401bb0e3fb72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721244792441994781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f47ac0a43f0e1d61
3a6c5abca3b9fb6c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df,PodSandboxId:d2fb4c975840ed4de46f2e3aa48c65553a74f30d31002f1919a5b46ec691b5f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721244792410513739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14aefd201618a5b2395b71f20510c
fb7,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0,PodSandboxId:cd850abcfceb77e39ac1f6bd317bda2d4b106fad5dbdc756a9e6b1fe7bc475f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721244792345138290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077876edb82a9270e4e34baa8fae306c,},Annotations:map[string]string{io.kube
rnetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5,PodSandboxId:b48faef64e337c26eeb2ab8fa47848edd5a2481a7632580820da1c0e45761d39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721244792305091129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795262caeee8afbec1e31fd0b6f3a9e1,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=538aa690-f338-444c-be48-bb99b2010d21 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.377784890Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4103d2e9-a08d-43b5-bcbe-0efbbfb5d1e8 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.377862426Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4103d2e9-a08d-43b5-bcbe-0efbbfb5d1e8 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.379153391Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ef4f717-2de9-44d5-b6ca-3c42586e49ab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.379584107Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245990379559288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ef4f717-2de9-44d5-b6ca-3c42586e49ab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.380791585Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c5c6090-63df-45ae-a4f5-c6c15ea07178 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.380872719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c5c6090-63df-45ae-a4f5-c6c15ea07178 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.381171118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c,PodSandboxId:7bea569d68669bce5032544241dd0ffd6fba7887bb2ee96886cc8f58ae38b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721244827911062994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785118d7-5d47-42fb-a3be-a13f7a837b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e205ae72a0b56cc35866955025cb089dee7c1709703b44d301f533a070699c96,PodSandboxId:68f1705638706c37ab2f51ff381dfcf98532f7d2d191a82d8862873af0f05610,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721244808288019270,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75d9f921-4990-4f7c-99d5-f2976d35cd5d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002,PodSandboxId:e0d7cb5205bf86d52581a3613db71edac9d0c7ef38e7d2d3b938120afcf97cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721244805202880065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hk8t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77,PodSandboxId:019ac1b79365ae4ac94c855a8c330ecc72a2bfed5a5ebc1baa4e06ea33f693a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721244797137140201,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x85f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaaf7ad0-8b1f-483c-97
7b-71ca6f2808c4,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe,PodSandboxId:7bea569d68669bce5032544241dd0ffd6fba7887bb2ee96886cc8f58ae38b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721244797230716556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785118d7-5d47-42fb-a3be-a13f7a837b
2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5,PodSandboxId:9a227e350da7ee752414b807ad484d43a843f3b32876f2b25676401bb0e3fb72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721244792441994781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f47ac0a43f0e1d61
3a6c5abca3b9fb6c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df,PodSandboxId:d2fb4c975840ed4de46f2e3aa48c65553a74f30d31002f1919a5b46ec691b5f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721244792410513739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14aefd201618a5b2395b71f20510c
fb7,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0,PodSandboxId:cd850abcfceb77e39ac1f6bd317bda2d4b106fad5dbdc756a9e6b1fe7bc475f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721244792345138290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077876edb82a9270e4e34baa8fae306c,},Annotations:map[string]string{io.kube
rnetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5,PodSandboxId:b48faef64e337c26eeb2ab8fa47848edd5a2481a7632580820da1c0e45761d39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721244792305091129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795262caeee8afbec1e31fd0b6f3a9e1,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c5c6090-63df-45ae-a4f5-c6c15ea07178 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.429966247Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e915a8b-ffce-4e2f-817f-2c2413d2e2b8 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.430092429Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e915a8b-ffce-4e2f-817f-2c2413d2e2b8 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.431499947Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=130d1785-5aa1-47d1-8bb7-76efee4ae7e3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.431983534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245990431954162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=130d1785-5aa1-47d1-8bb7-76efee4ae7e3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.432677479Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f9d1f93-d976-40a0-b7f0-106a3731b8d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.432790807Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f9d1f93-d976-40a0-b7f0-106a3731b8d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:10 no-preload-713715 crio[735]: time="2024-07-17 19:53:10.433035146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c,PodSandboxId:7bea569d68669bce5032544241dd0ffd6fba7887bb2ee96886cc8f58ae38b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721244827911062994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785118d7-5d47-42fb-a3be-a13f7a837b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e205ae72a0b56cc35866955025cb089dee7c1709703b44d301f533a070699c96,PodSandboxId:68f1705638706c37ab2f51ff381dfcf98532f7d2d191a82d8862873af0f05610,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721244808288019270,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75d9f921-4990-4f7c-99d5-f2976d35cd5d,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002,PodSandboxId:e0d7cb5205bf86d52581a3613db71edac9d0c7ef38e7d2d3b938120afcf97cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721244805202880065,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-hk8t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77,PodSandboxId:019ac1b79365ae4ac94c855a8c330ecc72a2bfed5a5ebc1baa4e06ea33f693a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721244797137140201,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x85f5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaaf7ad0-8b1f-483c-97
7b-71ca6f2808c4,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe,PodSandboxId:7bea569d68669bce5032544241dd0ffd6fba7887bb2ee96886cc8f58ae38b7af,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721244797230716556,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 785118d7-5d47-42fb-a3be-a13f7a837b
2b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5,PodSandboxId:9a227e350da7ee752414b807ad484d43a843f3b32876f2b25676401bb0e3fb72,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721244792441994781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f47ac0a43f0e1d61
3a6c5abca3b9fb6c,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df,PodSandboxId:d2fb4c975840ed4de46f2e3aa48c65553a74f30d31002f1919a5b46ec691b5f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721244792410513739,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14aefd201618a5b2395b71f20510c
fb7,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0,PodSandboxId:cd850abcfceb77e39ac1f6bd317bda2d4b106fad5dbdc756a9e6b1fe7bc475f7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721244792345138290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077876edb82a9270e4e34baa8fae306c,},Annotations:map[string]string{io.kube
rnetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5,PodSandboxId:b48faef64e337c26eeb2ab8fa47848edd5a2481a7632580820da1c0e45761d39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721244792305091129,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-713715,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 795262caeee8afbec1e31fd0b6f3a9e1,},Annotations:map[string]string{io.kubernetes.contain
er.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f9d1f93-d976-40a0-b7f0-106a3731b8d1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a2b43922786ee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   7bea569d68669       storage-provisioner
	e205ae72a0b56       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   68f1705638706       busybox
	9015174934a8d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   e0d7cb5205bf8       coredns-5cfdc65f69-hk8t7
	7511bf4f30ac3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   7bea569d68669       storage-provisioner
	ab5470bd76139       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899                                      19 minutes ago      Running             kube-proxy                1                   019ac1b79365a       kube-proxy-x85f5
	e14420efe38fa       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5                                      19 minutes ago      Running             kube-controller-manager   1                   9a227e350da7e       kube-controller-manager-no-preload-713715
	5b404425859ea       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b                                      19 minutes ago      Running             kube-scheduler            1                   d2fb4c975840e       kube-scheduler-no-preload-713715
	ade9a3d882a93       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa                                      19 minutes ago      Running             etcd                      1                   cd850abcfceb7       etcd-no-preload-713715
	94d1d32be33b0       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938                                      19 minutes ago      Running             kube-apiserver            1                   b48faef64e337       kube-apiserver-no-preload-713715
	
	
	==> coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41862 - 58330 "HINFO IN 1092279852445007707.3091120396433038524. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010275096s
	
	
	==> describe nodes <==
	Name:               no-preload-713715
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-713715
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=no-preload-713715
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T19_25_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 19:25:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-713715
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 19:53:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 19:49:06 +0000   Wed, 17 Jul 2024 19:24:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 19:49:06 +0000   Wed, 17 Jul 2024 19:24:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 19:49:06 +0000   Wed, 17 Jul 2024 19:24:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 19:49:06 +0000   Wed, 17 Jul 2024 19:33:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.66
	  Hostname:    no-preload-713715
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bf73da8038174625a1d5606b328ec5a5
	  System UUID:                bf73da80-3817-4625-a1d5-606b328ec5a5
	  Boot ID:                    2eb53699-ac94-4175-a9fd-bae1ddb1628c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-5cfdc65f69-hk8t7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-713715                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-713715             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-713715    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-x85f5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-713715             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-78fcd8795b-q2jgb              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 28m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-713715 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-713715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-713715 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                28m                kubelet          Node no-preload-713715 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-713715 event: Registered Node no-preload-713715 in Controller
	  Normal  CIDRAssignmentFailed     28m                cidrAllocator    Node no-preload-713715 status is now: CIDRAssignmentFailed
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-713715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-713715 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-713715 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-713715 event: Registered Node no-preload-713715 in Controller
	
	
	==> dmesg <==
	[Jul17 19:32] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050198] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040004] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.530370] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.388984] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.597462] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.112676] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.056336] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065192] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.205948] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.140136] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.311341] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[Jul17 19:33] systemd-fstab-generator[1186]: Ignoring "noauto" option for root device
	[  +0.058452] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.981389] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +3.511450] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.808199] kauditd_printk_skb: 37 callbacks suppressed
	[  +0.722114] systemd-fstab-generator[2057]: Ignoring "noauto" option for root device
	[  +4.856604] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] <==
	{"level":"warn","ts":"2024-07-17T19:34:00.6001Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T19:34:00.114448Z","time spent":"485.640285ms","remote":"127.0.0.1:48958","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1141,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2024-07-17T19:34:00.600592Z","caller":"traceutil/trace.go:171","msg":"trace[2068860945] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:596; }","duration":"634.806147ms","start":"2024-07-17T19:33:59.965777Z","end":"2024-07-17T19:34:00.600583Z","steps":["trace[2068860945] 'agreement among raft nodes before linearized reading'  (duration: 632.416438ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:34:00.601159Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T19:33:59.965754Z","time spent":"635.202055ms","remote":"127.0.0.1:48826","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":157,"request content":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" "}
	{"level":"info","ts":"2024-07-17T19:34:00.600752Z","caller":"traceutil/trace.go:171","msg":"trace[1561281354] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-78fcd8795b-q2jgb; range_end:; response_count:1; response_revision:596; }","duration":"420.060105ms","start":"2024-07-17T19:34:00.180682Z","end":"2024-07-17T19:34:00.600742Z","steps":["trace[1561281354] 'agreement among raft nodes before linearized reading'  (duration: 418.974135ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:34:00.601469Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T19:34:00.180647Z","time spent":"420.811272ms","remote":"127.0.0.1:48964","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4363,"request content":"key:\"/registry/pods/kube-system/metrics-server-78fcd8795b-q2jgb\" "}
	{"level":"warn","ts":"2024-07-17T19:34:01.30624Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"424.561031ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T19:34:01.306446Z","caller":"traceutil/trace.go:171","msg":"trace[1464831555] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:597; }","duration":"424.8001ms","start":"2024-07-17T19:34:00.881633Z","end":"2024-07-17T19:34:01.306433Z","steps":["trace[1464831555] 'range keys from in-memory index tree'  (duration: 424.549355ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:34:01.306763Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.037806ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T19:34:01.306886Z","caller":"traceutil/trace.go:171","msg":"trace[1816133111] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:597; }","duration":"416.165039ms","start":"2024-07-17T19:34:00.890711Z","end":"2024-07-17T19:34:01.306876Z","steps":["trace[1816133111] 'range keys from in-memory index tree'  (duration: 416.029974ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:34:01.307104Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"377.672795ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6987775460532258506 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-78fcd8795b-q2jgb.17e317076ac668b8\" mod_revision:568 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-78fcd8795b-q2jgb.17e317076ac668b8\" value_size:830 lease:6987775460532257839 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-78fcd8795b-q2jgb.17e317076ac668b8\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T19:34:01.307179Z","caller":"traceutil/trace.go:171","msg":"trace[718222474] linearizableReadLoop","detail":"{readStateIndex:643; appliedIndex:642; }","duration":"641.921059ms","start":"2024-07-17T19:34:00.665252Z","end":"2024-07-17T19:34:01.307173Z","steps":["trace[718222474] 'read index received'  (duration: 263.916713ms)","trace[718222474] 'applied index is now lower than readState.Index'  (duration: 378.003281ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T19:34:01.307251Z","caller":"traceutil/trace.go:171","msg":"trace[732177036] transaction","detail":"{read_only:false; response_revision:598; number_of_response:1; }","duration":"695.093444ms","start":"2024-07-17T19:34:00.61215Z","end":"2024-07-17T19:34:01.307244Z","steps":["trace[732177036] 'process raft request'  (duration: 317.207082ms)","trace[732177036] 'compare'  (duration: 377.122159ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T19:34:01.30739Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T19:34:00.612141Z","time spent":"695.141402ms","remote":"127.0.0.1:48880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":925,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-78fcd8795b-q2jgb.17e317076ac668b8\" mod_revision:568 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-78fcd8795b-q2jgb.17e317076ac668b8\" value_size:830 lease:6987775460532257839 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-78fcd8795b-q2jgb.17e317076ac668b8\" > >"}
	{"level":"warn","ts":"2024-07-17T19:34:01.307588Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"695.543175ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:475"}
	{"level":"info","ts":"2024-07-17T19:34:01.307635Z","caller":"traceutil/trace.go:171","msg":"trace[234859674] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:598; }","duration":"695.59045ms","start":"2024-07-17T19:34:00.612038Z","end":"2024-07-17T19:34:01.307628Z","steps":["trace[234859674] 'agreement among raft nodes before linearized reading'  (duration: 695.498987ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:34:01.307671Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T19:34:00.612023Z","time spent":"695.643812ms","remote":"127.0.0.1:49028","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":499,"request content":"key:\"/registry/endpointslices/default/kubernetes\" "}
	{"level":"warn","ts":"2024-07-17T19:34:01.307861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"626.99886ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-78fcd8795b-q2jgb\" ","response":"range_response_count:1 size:4339"}
	{"level":"info","ts":"2024-07-17T19:34:01.307903Z","caller":"traceutil/trace.go:171","msg":"trace[1564775303] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-78fcd8795b-q2jgb; range_end:; response_count:1; response_revision:598; }","duration":"627.041019ms","start":"2024-07-17T19:34:00.680856Z","end":"2024-07-17T19:34:01.307897Z","steps":["trace[1564775303] 'agreement among raft nodes before linearized reading'  (duration: 626.949392ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:34:01.307947Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T19:34:00.680814Z","time spent":"627.127734ms","remote":"127.0.0.1:48964","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4363,"request content":"key:\"/registry/pods/kube-system/metrics-server-78fcd8795b-q2jgb\" "}
	{"level":"info","ts":"2024-07-17T19:43:14.54714Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":817}
	{"level":"info","ts":"2024-07-17T19:43:14.557083Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":817,"took":"9.528478ms","hash":3334974253,"current-db-size-bytes":2580480,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2580480,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-17T19:43:14.557176Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3334974253,"revision":817,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T19:48:14.555373Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1059}
	{"level":"info","ts":"2024-07-17T19:48:14.559209Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1059,"took":"3.491088ms","hash":3878137437,"current-db-size-bytes":2580480,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1613824,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-17T19:48:14.559261Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3878137437,"revision":1059,"compact-revision":817}
	
	
	==> kernel <==
	 19:53:10 up 20 min,  0 users,  load average: 0.03, 0.19, 0.18
	Linux no-preload-713715 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] <==
	W0717 19:48:17.303123       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 19:48:17.303388       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0717 19:48:17.304479       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 19:48:17.304470       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:49:17.304996       1 handler_proxy.go:99] no RequestInfo found in the context
	W0717 19:49:17.305020       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 19:49:17.305357       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0717 19:49:17.305250       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0717 19:49:17.306549       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 19:49:17.306575       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:51:17.306781       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 19:51:17.307189       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0717 19:51:17.307076       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 19:51:17.307268       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0717 19:51:17.308935       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 19:51:17.309024       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] <==
	E0717 19:47:54.368779       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:47:54.390095       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:48:24.378024       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:48:24.399611       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:48:54.386769       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:48:54.407873       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 19:49:06.324752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-713715"
	E0717 19:49:24.394039       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:49:24.415219       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 19:49:30.708737       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="268.687µs"
	I0717 19:49:42.712007       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="132.173µs"
	E0717 19:49:54.400411       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:49:54.422431       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:50:24.406857       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:50:24.429988       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:50:54.415280       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:50:54.440084       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:51:24.421040       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:51:24.450628       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:51:54.427642       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:51:54.458473       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:52:24.434243       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:52:24.465981       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:52:54.441007       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:52:54.473885       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0717 19:33:17.553563       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0717 19:33:17.567810       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.66"]
	E0717 19:33:17.567899       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0717 19:33:17.608026       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0717 19:33:17.608075       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 19:33:17.608146       1 server_linux.go:170] "Using iptables Proxier"
	I0717 19:33:17.611039       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0717 19:33:17.611388       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0717 19:33:17.611475       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:33:17.613156       1 config.go:197] "Starting service config controller"
	I0717 19:33:17.613189       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 19:33:17.613215       1 config.go:104] "Starting endpoint slice config controller"
	I0717 19:33:17.613219       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 19:33:17.615271       1 config.go:326] "Starting node config controller"
	I0717 19:33:17.615363       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 19:33:17.713446       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 19:33:17.713514       1 shared_informer.go:320] Caches are synced for service config
	I0717 19:33:17.715529       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] <==
	I0717 19:33:13.332027       1 serving.go:386] Generated self-signed cert in-memory
	W0717 19:33:16.167961       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 19:33:16.168135       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:33:16.168254       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 19:33:16.168289       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 19:33:16.216825       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0717 19:33:16.219082       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:33:16.226514       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 19:33:16.226796       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0717 19:33:16.226649       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	W0717 19:33:16.290728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 19:33:16.291953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0717 19:33:16.291507       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 19:33:16.294567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0717 19:33:16.291524       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 19:33:16.395595       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 19:50:11 no-preload-713715 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:50:11 no-preload-713715 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:50:25 no-preload-713715 kubelet[1316]: E0717 19:50:25.693432    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:50:39 no-preload-713715 kubelet[1316]: E0717 19:50:39.693799    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:50:52 no-preload-713715 kubelet[1316]: E0717 19:50:52.693755    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:51:03 no-preload-713715 kubelet[1316]: E0717 19:51:03.693947    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:51:11 no-preload-713715 kubelet[1316]: E0717 19:51:11.708200    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:51:11 no-preload-713715 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:51:11 no-preload-713715 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:51:11 no-preload-713715 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:51:11 no-preload-713715 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:51:17 no-preload-713715 kubelet[1316]: E0717 19:51:17.693000    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:51:32 no-preload-713715 kubelet[1316]: E0717 19:51:32.693022    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:51:46 no-preload-713715 kubelet[1316]: E0717 19:51:46.693498    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:51:59 no-preload-713715 kubelet[1316]: E0717 19:51:59.692755    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:52:11 no-preload-713715 kubelet[1316]: E0717 19:52:11.706415    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:52:11 no-preload-713715 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:52:11 no-preload-713715 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:52:11 no-preload-713715 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:52:11 no-preload-713715 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:52:12 no-preload-713715 kubelet[1316]: E0717 19:52:12.693733    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:52:27 no-preload-713715 kubelet[1316]: E0717 19:52:27.694877    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:52:40 no-preload-713715 kubelet[1316]: E0717 19:52:40.693622    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:52:53 no-preload-713715 kubelet[1316]: E0717 19:52:53.695064    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	Jul 17 19:53:07 no-preload-713715 kubelet[1316]: E0717 19:53:07.695149    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-q2jgb" podUID="4e882d43-dbeb-467a-980f-095e1f79dcf2"
	
	
	==> storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] <==
	I0717 19:33:17.424181       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 19:33:47.429109       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] <==
	I0717 19:33:48.019405       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 19:33:48.032557       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 19:33:48.032706       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 19:33:48.040573       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 19:33:48.040809       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-713715_070b5367-188f-4189-af1f-8086de9b29b7!
	I0717 19:33:48.041799       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"482f3537-670c-4054-ac22-126ea9033289", APIVersion:"v1", ResourceVersion:"583", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-713715_070b5367-188f-4189-af1f-8086de9b29b7 became leader
	I0717 19:33:48.141848       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-713715_070b5367-188f-4189-af1f-8086de9b29b7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-713715 -n no-preload-713715
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-713715 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-q2jgb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-713715 describe pod metrics-server-78fcd8795b-q2jgb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-713715 describe pod metrics-server-78fcd8795b-q2jgb: exit status 1 (74.406902ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-q2jgb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-713715 describe pod metrics-server-78fcd8795b-q2jgb: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (382.63s)
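For reference, the non-running-pod check that the post-mortem above runs through kubectl (helpers_test.go:261, `--field-selector=status.phase!=Running`) can be reproduced outside the harness. The following is a minimal client-go sketch, not part of the test suite; the kubeconfig path is an assumption, whereas the harness targets the cluster by passing `--context no-preload-713715` to kubectl instead.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default kubeconfig (~/.kube/config) points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same filter as the post-mortem kubectl call: every pod, in every
	// namespace, whose phase is anything other than Running.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}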

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (400.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-378944 -n default-k8s-diff-port-378944
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-17 19:54:18.406515118 +0000 UTC m=+6714.723532783
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-378944 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-378944 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.198µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-378944 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
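What start_stop_delete_test.go:287 does above is a bounded wait for any pod labelled k8s-app=kubernetes-dashboard to reach Running; once the 9m0s deadline expires, the wait and the follow-up describe call (which reuses the already-expired context) both fail with "context deadline exceeded". Below is a minimal sketch of such a wait using client-go; it is an illustration, not the harness's implementation, and the 5-second poll interval plus the kubeconfig location are assumptions.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDashboardPod polls until a pod matching the label selector is Running,
// or returns the context error once the deadline passes.
func waitForDashboardPod(client kubernetes.Interface, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil // dashboard addon pod is up
				}
			}
		}
		select {
		case <-ctx.Done():
			// The "context deadline exceeded" path seen in the log above.
			return fmt.Errorf("k8s-app=kubernetes-dashboard failed to start within %s: %w", timeout, ctx.Err())
		case <-time.After(5 * time.Second): // assumed poll interval
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitForDashboardPod(client, 9*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("dashboard pod is Running")
}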
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-378944 -n default-k8s-diff-port-378944
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-378944 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-378944 logs -n 25: (1.130612896s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-378944  | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC | 17 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC |                     |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-998147        | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-637675                 | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-713715                  | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC | 17 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-713715 --memory=2200                     | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:37 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-378944       | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:38 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-998147             | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:52 UTC | 17 Jul 24 19:52 UTC |
	| start   | -p newest-cni-500710 --memory=2200 --alsologtostderr   | newest-cni-500710            | jenkins | v1.33.1 | 17 Jul 24 19:52 UTC | 17 Jul 24 19:53 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-713715                                   | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:53 UTC | 17 Jul 24 19:53 UTC |
	| delete  | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:53 UTC | 17 Jul 24 19:53 UTC |
	| addons  | enable metrics-server -p newest-cni-500710             | newest-cni-500710            | jenkins | v1.33.1 | 17 Jul 24 19:53 UTC | 17 Jul 24 19:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-500710                                   | newest-cni-500710            | jenkins | v1.33.1 | 17 Jul 24 19:53 UTC | 17 Jul 24 19:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-500710                  | newest-cni-500710            | jenkins | v1.33.1 | 17 Jul 24 19:53 UTC | 17 Jul 24 19:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-500710 --memory=2200 --alsologtostderr   | newest-cni-500710            | jenkins | v1.33.1 | 17 Jul 24 19:53 UTC | 17 Jul 24 19:54 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| image   | newest-cni-500710 image list                           | newest-cni-500710            | jenkins | v1.33.1 | 17 Jul 24 19:54 UTC | 17 Jul 24 19:54 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-500710                                   | newest-cni-500710            | jenkins | v1.33.1 | 17 Jul 24 19:54 UTC | 17 Jul 24 19:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-500710                                   | newest-cni-500710            | jenkins | v1.33.1 | 17 Jul 24 19:54 UTC | 17 Jul 24 19:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-500710                                   | newest-cni-500710            | jenkins | v1.33.1 | 17 Jul 24 19:54 UTC | 17 Jul 24 19:54 UTC |
	| delete  | -p newest-cni-500710                                   | newest-cni-500710            | jenkins | v1.33.1 | 17 Jul 24 19:54 UTC | 17 Jul 24 19:54 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
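For reference, the final start of newest-cni-500710 recorded in the table corresponds roughly to the following invocation, with the flags taken verbatim from the Args column and the binary path being the one used throughout this report:

	out/minikube-linux-amd64 start -p newest-cni-500710 --memory=2200 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.31.0-beta.0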
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:53:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:53:30.949332  466832 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:53:30.949428  466832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:53:30.949435  466832 out.go:304] Setting ErrFile to fd 2...
	I0717 19:53:30.949439  466832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:53:30.949586  466832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:53:30.950114  466832 out.go:298] Setting JSON to false
	I0717 19:53:30.951027  466832 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12954,"bootTime":1721233057,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:53:30.951083  466832 start.go:139] virtualization: kvm guest
	I0717 19:53:30.953272  466832 out.go:177] * [newest-cni-500710] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:53:30.954721  466832 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 19:53:30.954782  466832 notify.go:220] Checking for updates...
	I0717 19:53:30.957396  466832 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:53:30.958756  466832 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:53:30.960012  466832 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:53:30.961223  466832 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:53:30.962467  466832 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:53:30.963983  466832 config.go:182] Loaded profile config "newest-cni-500710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:53:30.964452  466832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:53:30.964542  466832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:53:30.979526  466832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40557
	I0717 19:53:30.979899  466832 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:53:30.980455  466832 main.go:141] libmachine: Using API Version  1
	I0717 19:53:30.980478  466832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:53:30.980862  466832 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:53:30.981109  466832 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:53:30.981406  466832 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:53:30.981726  466832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:53:30.981773  466832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:53:30.996870  466832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41833
	I0717 19:53:30.997368  466832 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:53:30.998002  466832 main.go:141] libmachine: Using API Version  1
	I0717 19:53:30.998036  466832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:53:30.998374  466832 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:53:30.998626  466832 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:53:31.034797  466832 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:53:31.036194  466832 start.go:297] selected driver: kvm2
	I0717 19:53:31.036215  466832 start.go:901] validating driver "kvm2" against &{Name:newest-cni-500710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0-beta.0 ClusterName:newest-cni-500710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system
_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:53:31.036351  466832 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:53:31.037135  466832 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:53:31.037227  466832 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:53:31.052590  466832 install.go:137] /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 19:53:31.053047  466832 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 19:53:31.053081  466832 cni.go:84] Creating CNI manager for ""
	I0717 19:53:31.053090  466832 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:53:31.053146  466832 start.go:340] cluster config:
	{Name:newest-cni-500710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-500710 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAd
dress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:53:31.053307  466832 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:53:31.055337  466832 out.go:177] * Starting "newest-cni-500710" primary control-plane node in "newest-cni-500710" cluster
	I0717 19:53:31.056760  466832 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:53:31.056801  466832 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 19:53:31.056832  466832 cache.go:56] Caching tarball of preloaded images
	I0717 19:53:31.056913  466832 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:53:31.056923  466832 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0717 19:53:31.057070  466832 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/config.json ...
	I0717 19:53:31.057250  466832 start.go:360] acquireMachinesLock for newest-cni-500710: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:53:31.057298  466832 start.go:364] duration metric: took 29.883µs to acquireMachinesLock for "newest-cni-500710"
	I0717 19:53:31.057313  466832 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:53:31.057320  466832 fix.go:54] fixHost starting: 
	I0717 19:53:31.057588  466832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:53:31.057643  466832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:53:31.072836  466832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34655
	I0717 19:53:31.073268  466832 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:53:31.073743  466832 main.go:141] libmachine: Using API Version  1
	I0717 19:53:31.073764  466832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:53:31.074265  466832 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:53:31.074481  466832 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:53:31.074648  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetState
	I0717 19:53:31.076308  466832 fix.go:112] recreateIfNeeded on newest-cni-500710: state=Stopped err=<nil>
	I0717 19:53:31.076329  466832 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	W0717 19:53:31.076499  466832 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:53:31.078424  466832 out.go:177] * Restarting existing kvm2 VM for "newest-cni-500710" ...
	I0717 19:53:31.079654  466832 main.go:141] libmachine: (newest-cni-500710) Calling .Start
	I0717 19:53:31.079844  466832 main.go:141] libmachine: (newest-cni-500710) Ensuring networks are active...
	I0717 19:53:31.080616  466832 main.go:141] libmachine: (newest-cni-500710) Ensuring network default is active
	I0717 19:53:31.081024  466832 main.go:141] libmachine: (newest-cni-500710) Ensuring network mk-newest-cni-500710 is active
	I0717 19:53:31.081390  466832 main.go:141] libmachine: (newest-cni-500710) Getting domain xml...
	I0717 19:53:31.082157  466832 main.go:141] libmachine: (newest-cni-500710) Creating domain...
	I0717 19:53:32.278580  466832 main.go:141] libmachine: (newest-cni-500710) Waiting to get IP...
	I0717 19:53:32.279502  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:32.279901  466832 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:53:32.279959  466832 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:53:32.279884  466867 retry.go:31] will retry after 274.178591ms: waiting for machine to come up
	I0717 19:53:32.555447  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:32.555994  466832 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:53:32.556062  466832 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:53:32.555968  466867 retry.go:31] will retry after 304.734144ms: waiting for machine to come up
	I0717 19:53:32.862503  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:32.862992  466832 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:53:32.863022  466832 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:53:32.862937  466867 retry.go:31] will retry after 339.85799ms: waiting for machine to come up
	I0717 19:53:33.204401  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:33.204954  466832 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:53:33.204983  466832 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:53:33.204900  466867 retry.go:31] will retry after 529.245854ms: waiting for machine to come up
	I0717 19:53:33.735464  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:33.735912  466832 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:53:33.735965  466832 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:53:33.735850  466867 retry.go:31] will retry after 652.687251ms: waiting for machine to come up
	I0717 19:53:34.390659  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:34.391152  466832 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:53:34.391199  466832 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:53:34.391109  466867 retry.go:31] will retry after 716.681767ms: waiting for machine to come up
	I0717 19:53:35.109084  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:35.109472  466832 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:53:35.109502  466832 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:53:35.109414  466867 retry.go:31] will retry after 823.282182ms: waiting for machine to come up
	I0717 19:53:35.934400  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:35.934843  466832 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:53:35.934873  466832 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:53:35.934780  466867 retry.go:31] will retry after 1.482859686s: waiting for machine to come up
	I0717 19:53:37.420179  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:37.420704  466832 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:53:37.420734  466832 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:53:37.420657  466867 retry.go:31] will retry after 1.817614574s: waiting for machine to come up
	I0717 19:53:39.239683  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:39.240140  466832 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:53:39.240161  466832 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:53:39.240087  466867 retry.go:31] will retry after 2.089300454s: waiting for machine to come up
	I0717 19:53:41.330640  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:41.331127  466832 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:53:41.331157  466832 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:53:41.331078  466867 retry.go:31] will retry after 2.136565472s: waiting for machine to come up
	I0717 19:53:43.469467  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:43.469925  466832 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:53:43.469950  466832 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:53:43.469872  466867 retry.go:31] will retry after 3.405962715s: waiting for machine to come up
	I0717 19:53:46.876976  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:46.877420  466832 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:53:46.877455  466832 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:53:46.877363  466867 retry.go:31] will retry after 3.11458024s: waiting for machine to come up
	I0717 19:53:49.995575  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:49.996026  466832 main.go:141] libmachine: (newest-cni-500710) Found IP for machine: 192.168.72.104
	I0717 19:53:49.996048  466832 main.go:141] libmachine: (newest-cni-500710) Reserving static IP address...
	I0717 19:53:49.996081  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has current primary IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:49.996584  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "newest-cni-500710", mac: "52:54:00:9b:88:f9", ip: "192.168.72.104"} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:49.996617  466832 main.go:141] libmachine: (newest-cni-500710) DBG | skip adding static IP to network mk-newest-cni-500710 - found existing host DHCP lease matching {name: "newest-cni-500710", mac: "52:54:00:9b:88:f9", ip: "192.168.72.104"}
	I0717 19:53:49.996685  466832 main.go:141] libmachine: (newest-cni-500710) DBG | Getting to WaitForSSH function...
	I0717 19:53:49.996740  466832 main.go:141] libmachine: (newest-cni-500710) Reserved static IP address: 192.168.72.104
	I0717 19:53:49.996761  466832 main.go:141] libmachine: (newest-cni-500710) Waiting for SSH to be available...
	I0717 19:53:49.998628  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:49.998949  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:49.998982  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:49.999044  466832 main.go:141] libmachine: (newest-cni-500710) DBG | Using SSH client type: external
	I0717 19:53:49.999084  466832 main.go:141] libmachine: (newest-cni-500710) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa (-rw-------)
	I0717 19:53:49.999133  466832 main.go:141] libmachine: (newest-cni-500710) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:53:49.999164  466832 main.go:141] libmachine: (newest-cni-500710) DBG | About to run SSH command:
	I0717 19:53:49.999178  466832 main.go:141] libmachine: (newest-cni-500710) DBG | exit 0
	I0717 19:53:50.128860  466832 main.go:141] libmachine: (newest-cni-500710) DBG | SSH cmd err, output: <nil>: 
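The driver is only probing SSH readiness here: it runs "exit 0" through an external ssh client with the options listed above. Assuming the VM from this run were still up, a manual connection with roughly the same effect would be:

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa \
	    docker@192.168.72.104 'exit 0'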
	I0717 19:53:50.129239  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetConfigRaw
	I0717 19:53:50.129901  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetIP
	I0717 19:53:50.132180  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:50.132478  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:50.132519  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:50.132734  466832 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/config.json ...
	I0717 19:53:50.132934  466832 machine.go:94] provisionDockerMachine start ...
	I0717 19:53:50.132954  466832 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:53:50.133219  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:53:50.135517  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:50.135863  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:50.135893  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:50.136030  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:53:50.136217  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:53:50.136393  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:53:50.136510  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:53:50.136697  466832 main.go:141] libmachine: Using SSH client type: native
	I0717 19:53:50.136894  466832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:53:50.136910  466832 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:53:50.252969  466832 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:53:50.253000  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetMachineName
	I0717 19:53:50.253245  466832 buildroot.go:166] provisioning hostname "newest-cni-500710"
	I0717 19:53:50.253278  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetMachineName
	I0717 19:53:50.253493  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:53:50.256088  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:50.256474  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:50.256523  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:50.256622  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:53:50.256827  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:53:50.256993  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:53:50.257156  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:53:50.257385  466832 main.go:141] libmachine: Using SSH client type: native
	I0717 19:53:50.257618  466832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:53:50.257631  466832 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-500710 && echo "newest-cni-500710" | sudo tee /etc/hostname
	I0717 19:53:50.386919  466832 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-500710
	
	I0717 19:53:50.386972  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:53:50.389581  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:50.389941  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:50.389982  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:50.390143  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:53:50.390366  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:53:50.390531  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:53:50.390672  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:53:50.390844  466832 main.go:141] libmachine: Using SSH client type: native
	I0717 19:53:50.391081  466832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:53:50.391099  466832 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-500710' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-500710/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-500710' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:53:50.514492  466832 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:53:50.514530  466832 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:53:50.514573  466832 buildroot.go:174] setting up certificates
	I0717 19:53:50.514583  466832 provision.go:84] configureAuth start
	I0717 19:53:50.514598  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetMachineName
	I0717 19:53:50.514922  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetIP
	I0717 19:53:50.517765  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:50.518149  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:50.518181  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:50.518309  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:53:50.521182  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:50.521490  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:50.521541  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:50.521678  466832 provision.go:143] copyHostCerts
	I0717 19:53:50.521746  466832 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:53:50.521757  466832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:53:50.521819  466832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:53:50.521930  466832 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:53:50.521945  466832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:53:50.521971  466832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:53:50.522043  466832 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:53:50.522051  466832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:53:50.522072  466832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:53:50.522134  466832 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.newest-cni-500710 san=[127.0.0.1 192.168.72.104 localhost minikube newest-cni-500710]
	I0717 19:53:50.761856  466832 provision.go:177] copyRemoteCerts
	I0717 19:53:50.761910  466832 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:53:50.761936  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:53:50.764349  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:50.764635  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:50.764674  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:50.764796  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:53:50.765011  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:53:50.765183  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:53:50.765311  466832 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa Username:docker}
	I0717 19:53:50.851182  466832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:53:50.876574  466832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 19:53:50.900992  466832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:53:50.926252  466832 provision.go:87] duration metric: took 411.64607ms to configureAuth
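configureAuth regenerates the server certificate with the SANs listed above and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. An illustrative way to inspect what actually landed on the VM (assuming the profile still exists) is:

	out/minikube-linux-amd64 -p newest-cni-500710 ssh "ls -l /etc/docker"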
	I0717 19:53:50.926292  466832 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:53:50.926504  466832 config.go:182] Loaded profile config "newest-cni-500710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:53:50.926665  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:53:50.929358  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:50.929665  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:50.929694  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:50.929933  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:53:50.930262  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:53:50.930466  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:53:50.930615  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:53:50.930910  466832 main.go:141] libmachine: Using SSH client type: native
	I0717 19:53:50.931129  466832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:53:50.931152  466832 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:53:51.234388  466832 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
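The %!s(MISSING) printed a few lines above is an artifact of the logger re-formatting a string that already contained a literal %s verb; the command actually sent over SSH is presumably:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio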
	
	I0717 19:53:51.234420  466832 machine.go:97] duration metric: took 1.101470981s to provisionDockerMachine
	I0717 19:53:51.234435  466832 start.go:293] postStartSetup for "newest-cni-500710" (driver="kvm2")
	I0717 19:53:51.234452  466832 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:53:51.234482  466832 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:53:51.234869  466832 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:53:51.234902  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:53:51.237589  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:51.237915  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:51.237949  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:51.238075  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:53:51.238283  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:53:51.238438  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:53:51.238580  466832 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa Username:docker}
	I0717 19:53:51.327558  466832 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:53:51.332076  466832 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:53:51.332105  466832 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:53:51.332174  466832 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:53:51.332244  466832 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:53:51.332343  466832 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:53:51.342677  466832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:53:51.368542  466832 start.go:296] duration metric: took 134.091294ms for postStartSetup
	I0717 19:53:51.368593  466832 fix.go:56] duration metric: took 20.311272147s for fixHost
	I0717 19:53:51.368618  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:53:51.371648  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:51.372066  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:51.372102  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:51.372289  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:53:51.372512  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:53:51.372676  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:53:51.372824  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:53:51.373001  466832 main.go:141] libmachine: Using SSH client type: native
	I0717 19:53:51.373194  466832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:53:51.373206  466832 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:53:51.489371  466832 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721246031.454198568
	
	I0717 19:53:51.489407  466832 fix.go:216] guest clock: 1721246031.454198568
	I0717 19:53:51.489418  466832 fix.go:229] Guest: 2024-07-17 19:53:51.454198568 +0000 UTC Remote: 2024-07-17 19:53:51.368599418 +0000 UTC m=+20.453491518 (delta=85.59915ms)
	I0717 19:53:51.489465  466832 fix.go:200] guest clock delta is within tolerance: 85.59915ms
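The mangled date +%!s(MISSING).%!N(MISSING) is the same logging artifact; the probe most likely run on the guest is simply:

	date +%s.%N

minikube parses the result (here 1721246031.454198568) and compares it with the host clock, accepting the restart when the delta, about 85ms in this run, is within tolerance.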
	I0717 19:53:51.489473  466832 start.go:83] releasing machines lock for "newest-cni-500710", held for 20.43216469s
	I0717 19:53:51.489498  466832 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:53:51.489788  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetIP
	I0717 19:53:51.492266  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:51.492632  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:51.492662  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:51.492797  466832 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:53:51.493289  466832 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:53:51.493468  466832 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:53:51.493553  466832 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:53:51.493612  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:53:51.493711  466832 ssh_runner.go:195] Run: cat /version.json
	I0717 19:53:51.493738  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:53:51.496339  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:51.496531  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:51.496764  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:51.496797  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:51.496881  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:51.496915  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:53:51.496978  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:51.497066  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:53:51.497099  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:53:51.497225  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:53:51.497236  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:53:51.497378  466832 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa Username:docker}
	I0717 19:53:51.497438  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:53:51.497551  466832 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa Username:docker}
	I0717 19:53:51.597311  466832 ssh_runner.go:195] Run: systemctl --version
	I0717 19:53:51.603201  466832 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:53:51.756059  466832 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:53:51.762720  466832 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:53:51.762803  466832 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:53:51.778804  466832 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:53:51.778830  466832 start.go:495] detecting cgroup driver to use...
	I0717 19:53:51.778887  466832 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:53:51.794876  466832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:53:51.808932  466832 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:53:51.808992  466832 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:53:51.823258  466832 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:53:51.837883  466832 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:53:51.965799  466832 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:53:52.149964  466832 docker.go:233] disabling docker service ...
	I0717 19:53:52.150051  466832 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:53:52.164390  466832 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:53:52.178694  466832 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:53:52.313298  466832 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:53:52.430191  466832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:53:52.444973  466832 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:53:52.463380  466832 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 19:53:52.463441  466832 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:52.473722  466832 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:53:52.473810  466832 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:52.484124  466832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:52.494542  466832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:52.504446  466832 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:53:52.514500  466832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:52.524544  466832 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:52.540811  466832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
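For reference, after the sed edits above the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf should carry roughly the following keys (a sketch assembled from the commands logged above; other keys in the stock drop-in are left untouched):
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]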
	I0717 19:53:52.550990  466832 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:53:52.560434  466832 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:53:52.560520  466832 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:53:52.574463  466832 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:53:52.584119  466832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:53:52.714448  466832 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:53:52.849938  466832 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:53:52.850006  466832 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:53:52.854695  466832 start.go:563] Will wait 60s for crictl version
	I0717 19:53:52.854746  466832 ssh_runner.go:195] Run: which crictl
	I0717 19:53:52.858552  466832 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:53:52.897214  466832 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:53:52.897327  466832 ssh_runner.go:195] Run: crio --version
	I0717 19:53:52.924555  466832 ssh_runner.go:195] Run: crio --version
	I0717 19:53:52.953717  466832 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 19:53:52.954983  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetIP
	I0717 19:53:52.957802  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:52.958237  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:52.958262  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:52.958486  466832 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 19:53:52.962789  466832 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:53:52.977435  466832 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0717 19:53:52.978780  466832 kubeadm.go:883] updating cluster {Name:newest-cni-500710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:newest-cni-500710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:53:52.978936  466832 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:53:52.979011  466832 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:53:53.014127  466832 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 19:53:53.014210  466832 ssh_runner.go:195] Run: which lz4
	I0717 19:53:53.018796  466832 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 19:53:53.022935  466832 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:53:53.022964  466832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0717 19:53:54.369236  466832 crio.go:462] duration metric: took 1.350474567s to copy over tarball
	I0717 19:53:54.369321  466832 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:53:56.491406  466832 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.122056323s)
	I0717 19:53:56.491437  466832 crio.go:469] duration metric: took 2.122165543s to extract the tarball
	I0717 19:53:56.491445  466832 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:53:56.528437  466832 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:53:56.571038  466832 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:53:56.571063  466832 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:53:56.571072  466832 kubeadm.go:934] updating node { 192.168.72.104 8443 v1.31.0-beta.0 crio true true} ...
	I0717 19:53:56.571208  466832 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-500710 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-500710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:53:56.571309  466832 ssh_runner.go:195] Run: crio config
	I0717 19:53:56.615483  466832 cni.go:84] Creating CNI manager for ""
	I0717 19:53:56.615510  466832 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:53:56.615524  466832 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0717 19:53:56.615556  466832 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.104 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-500710 NodeName:newest-cni-500710 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] Feature
Args:map[] NodeIP:192.168.72.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:53:56.615741  466832 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-500710"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:53:56.615816  466832 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 19:53:56.626272  466832 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:53:56.626351  466832 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:53:56.636146  466832 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0717 19:53:56.652734  466832 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 19:53:56.668813  466832 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0717 19:53:56.685392  466832 ssh_runner.go:195] Run: grep 192.168.72.104	control-plane.minikube.internal$ /etc/hosts
	I0717 19:53:56.689464  466832 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:53:56.701771  466832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:53:56.827005  466832 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:53:56.844817  466832 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710 for IP: 192.168.72.104
	I0717 19:53:56.844840  466832 certs.go:194] generating shared ca certs ...
	I0717 19:53:56.844856  466832 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:53:56.845014  466832 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:53:56.845066  466832 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:53:56.845079  466832 certs.go:256] generating profile certs ...
	I0717 19:53:56.845195  466832 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/client.key
	I0717 19:53:56.845289  466832 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.key.c59b9261
	I0717 19:53:56.845331  466832 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.key
	I0717 19:53:56.845521  466832 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:53:56.845579  466832 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:53:56.845591  466832 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:53:56.845622  466832 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:53:56.845654  466832 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:53:56.845685  466832 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:53:56.845741  466832 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:53:56.846549  466832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:53:56.886545  466832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:53:56.912583  466832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:53:56.943916  466832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:53:56.971335  466832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 19:53:56.997367  466832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:53:57.022123  466832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:53:57.045839  466832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:53:57.072872  466832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:53:57.098141  466832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:53:57.123507  466832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:53:57.146784  466832 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:53:57.162966  466832 ssh_runner.go:195] Run: openssl version
	I0717 19:53:57.168588  466832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:53:57.179246  466832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:53:57.184128  466832 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:53:57.184193  466832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:53:57.190288  466832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:53:57.201742  466832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:53:57.213411  466832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:53:57.218528  466832 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:53:57.218586  466832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:53:57.224603  466832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:53:57.235664  466832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:53:57.246529  466832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:53:57.251454  466832 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:53:57.251512  466832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:53:57.257575  466832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
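The 3ec20f2e.0, b5213941.0 and 51391683.0 link names above are the standard c_rehash-style subject hashes produced by the openssl x509 -hash call that precedes each ln. For example, reproducing the hash for the minikube CA from this run (value taken from the symlink created above):
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941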
	I0717 19:53:57.268292  466832 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:53:57.273100  466832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:53:57.279166  466832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:53:57.285358  466832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:53:57.291540  466832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:53:57.297508  466832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:53:57.303388  466832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 19:53:57.309502  466832 kubeadm.go:392] StartCluster: {Name:newest-cni-500710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:newest-cni-500710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartH
ostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:53:57.309587  466832 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:53:57.309629  466832 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:53:57.347684  466832 cri.go:89] found id: ""
	I0717 19:53:57.347759  466832 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:53:57.358090  466832 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:53:57.358111  466832 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:53:57.358159  466832 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:53:57.368885  466832 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:53:57.369417  466832 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-500710" does not appear in /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:53:57.369693  466832 kubeconfig.go:62] /home/jenkins/minikube-integration/19282-392903/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-500710" cluster setting kubeconfig missing "newest-cni-500710" context setting]
	I0717 19:53:57.370098  466832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:53:57.371374  466832 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:53:57.380979  466832 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.104
	I0717 19:53:57.381017  466832 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:53:57.381032  466832 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:53:57.381075  466832 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:53:57.420650  466832 cri.go:89] found id: ""
	I0717 19:53:57.420711  466832 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:53:57.438489  466832 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:53:57.449510  466832 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:53:57.449533  466832 kubeadm.go:157] found existing configuration files:
	
	I0717 19:53:57.449604  466832 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:53:57.459151  466832 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:53:57.459211  466832 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:53:57.469127  466832 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:53:57.478615  466832 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:53:57.478667  466832 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:53:57.487953  466832 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:53:57.496791  466832 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:53:57.496857  466832 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:53:57.505806  466832 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:53:57.515301  466832 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:53:57.515356  466832 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:53:57.524530  466832 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:53:57.534485  466832 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:53:57.647977  466832 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:53:58.677520  466832 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.029508399s)
	I0717 19:53:58.677553  466832 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:53:59.124857  466832 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:53:59.332326  466832 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:53:59.393014  466832 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:53:59.393123  466832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:53:59.893749  466832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:54:00.393923  466832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:54:00.893737  466832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:54:00.907793  466832 api_server.go:72] duration metric: took 1.514777695s to wait for apiserver process to appear ...
	I0717 19:54:00.907835  466832 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:54:00.907859  466832 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8443/healthz ...
	I0717 19:54:03.661943  466832 api_server.go:279] https://192.168.72.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:54:03.661987  466832 api_server.go:103] status: https://192.168.72.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:54:03.662008  466832 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8443/healthz ...
	I0717 19:54:03.718758  466832 api_server.go:279] https://192.168.72.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:54:03.718790  466832 api_server.go:103] status: https://192.168.72.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:54:03.908065  466832 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8443/healthz ...
	I0717 19:54:03.913016  466832 api_server.go:279] https://192.168.72.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:54:03.913053  466832 api_server.go:103] status: https://192.168.72.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:54:04.408625  466832 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8443/healthz ...
	I0717 19:54:04.413462  466832 api_server.go:279] https://192.168.72.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:54:04.413488  466832 api_server.go:103] status: https://192.168.72.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:54:04.908024  466832 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8443/healthz ...
	I0717 19:54:04.919625  466832 api_server.go:279] https://192.168.72.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:54:04.919658  466832 api_server.go:103] status: https://192.168.72.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:54:05.408153  466832 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8443/healthz ...
	I0717 19:54:05.412374  466832 api_server.go:279] https://192.168.72.104:8443/healthz returned 200:
	ok
	I0717 19:54:05.419766  466832 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:54:05.419801  466832 api_server.go:131] duration metric: took 4.511958025s to wait for apiserver health ...
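The same readiness probe can be reproduced by hand against the endpoint logged above (IP and port are specific to this run). As the log shows, anonymous requests are rejected with 403 until the bootstrap RBAC roles exist, then /healthz reports per-hook status and finally returns 200:
	curl -k https://192.168.72.104:8443/healthz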
	I0717 19:54:05.419814  466832 cni.go:84] Creating CNI manager for ""
	I0717 19:54:05.419824  466832 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:54:05.421781  466832 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:54:05.423250  466832 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:54:05.434481  466832 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:54:05.456762  466832 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:54:05.465826  466832 system_pods.go:59] 8 kube-system pods found
	I0717 19:54:05.465876  466832 system_pods.go:61] "coredns-5cfdc65f69-lvmgk" [55cbf8bc-3d0d-4227-bd90-5722a2ffdcfc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:54:05.465887  466832 system_pods.go:61] "etcd-newest-cni-500710" [aacfe3f3-88a1-429e-9251-4084f6f4362d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:54:05.465898  466832 system_pods.go:61] "kube-apiserver-newest-cni-500710" [7d0283f5-faaa-4afb-b75f-7124ca7fe97a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:54:05.465907  466832 system_pods.go:61] "kube-controller-manager-newest-cni-500710" [665c82af-b815-4fff-82c6-b66416666be8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:54:05.465914  466832 system_pods.go:61] "kube-proxy-lbbdh" [e6a84690-a097-41c1-b8a8-27cbfd532824] Running
	I0717 19:54:05.465924  466832 system_pods.go:61] "kube-scheduler-newest-cni-500710" [e745a4cd-767b-45ec-888c-959d1662201f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:54:05.465935  466832 system_pods.go:61] "metrics-server-78fcd8795b-7npzx" [52f2bc9e-3df2-40c7-8ef3-ffaa106bcd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:54:05.465941  466832 system_pods.go:61] "storage-provisioner" [c35177c1-0afe-444c-9214-b57eb332f1a0] Running
	I0717 19:54:05.465950  466832 system_pods.go:74] duration metric: took 9.16285ms to wait for pod list to return data ...
	I0717 19:54:05.465959  466832 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:54:05.473493  466832 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:54:05.473534  466832 node_conditions.go:123] node cpu capacity is 2
	I0717 19:54:05.473552  466832 node_conditions.go:105] duration metric: took 7.58391ms to run NodePressure ...
	I0717 19:54:05.473579  466832 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:54:05.739011  466832 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:54:05.752098  466832 ops.go:34] apiserver oom_adj: -16
	I0717 19:54:05.752128  466832 kubeadm.go:597] duration metric: took 8.394009611s to restartPrimaryControlPlane
	I0717 19:54:05.752138  466832 kubeadm.go:394] duration metric: took 8.442642939s to StartCluster
	I0717 19:54:05.752163  466832 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:54:05.752261  466832 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:54:05.753462  466832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:54:05.753761  466832 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.104 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:54:05.753835  466832 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:54:05.753941  466832 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-500710"
	I0717 19:54:05.753966  466832 addons.go:69] Setting dashboard=true in profile "newest-cni-500710"
	I0717 19:54:05.753982  466832 addons.go:69] Setting metrics-server=true in profile "newest-cni-500710"
	I0717 19:54:05.753986  466832 addons.go:69] Setting default-storageclass=true in profile "newest-cni-500710"
	I0717 19:54:05.753977  466832 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-500710"
	W0717 19:54:05.754022  466832 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:54:05.754034  466832 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-500710"
	I0717 19:54:05.754063  466832 host.go:66] Checking if "newest-cni-500710" exists ...
	I0717 19:54:05.754004  466832 config.go:182] Loaded profile config "newest-cni-500710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:54:05.754002  466832 addons.go:234] Setting addon metrics-server=true in "newest-cni-500710"
	W0717 19:54:05.754170  466832 addons.go:243] addon metrics-server should already be in state true
	I0717 19:54:05.754008  466832 addons.go:234] Setting addon dashboard=true in "newest-cni-500710"
	W0717 19:54:05.754200  466832 addons.go:243] addon dashboard should already be in state true
	I0717 19:54:05.754212  466832 host.go:66] Checking if "newest-cni-500710" exists ...
	I0717 19:54:05.754236  466832 host.go:66] Checking if "newest-cni-500710" exists ...
	I0717 19:54:05.754469  466832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:54:05.754469  466832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:54:05.754527  466832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:54:05.754536  466832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:54:05.754585  466832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:54:05.754595  466832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:54:05.754621  466832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:54:05.754638  466832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:54:05.756062  466832 out.go:177] * Verifying Kubernetes components...
	I0717 19:54:05.757452  466832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:54:05.771244  466832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42111
	I0717 19:54:05.771750  466832 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:54:05.772287  466832 main.go:141] libmachine: Using API Version  1
	I0717 19:54:05.772316  466832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:54:05.772695  466832 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:54:05.773284  466832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:54:05.773317  466832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:54:05.774814  466832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33483
	I0717 19:54:05.775120  466832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38005
	I0717 19:54:05.775227  466832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I0717 19:54:05.775387  466832 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:54:05.775559  466832 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:54:05.775580  466832 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:54:05.776073  466832 main.go:141] libmachine: Using API Version  1
	I0717 19:54:05.776086  466832 main.go:141] libmachine: Using API Version  1
	I0717 19:54:05.776092  466832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:54:05.776101  466832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:54:05.776225  466832 main.go:141] libmachine: Using API Version  1
	I0717 19:54:05.776242  466832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:54:05.776599  466832 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:54:05.776650  466832 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:54:05.776675  466832 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:54:05.777095  466832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:54:05.777118  466832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:54:05.777300  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetState
	I0717 19:54:05.777643  466832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:54:05.777695  466832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:54:05.780877  466832 addons.go:234] Setting addon default-storageclass=true in "newest-cni-500710"
	W0717 19:54:05.780901  466832 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:54:05.780933  466832 host.go:66] Checking if "newest-cni-500710" exists ...
	I0717 19:54:05.781327  466832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:54:05.781374  466832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:54:05.795660  466832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46201
	I0717 19:54:05.795999  466832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43229
	I0717 19:54:05.796215  466832 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:54:05.796505  466832 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:54:05.796969  466832 main.go:141] libmachine: Using API Version  1
	I0717 19:54:05.796991  466832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:54:05.796969  466832 main.go:141] libmachine: Using API Version  1
	I0717 19:54:05.797046  466832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:54:05.797119  466832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41327
	I0717 19:54:05.797304  466832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33317
	I0717 19:54:05.797471  466832 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:54:05.797561  466832 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:54:05.797686  466832 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:54:05.797753  466832 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:54:05.798005  466832 main.go:141] libmachine: Using API Version  1
	I0717 19:54:05.798020  466832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:54:05.798087  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetState
	I0717 19:54:05.798478  466832 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:54:05.798510  466832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:54:05.798789  466832 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:54:05.798986  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetState
	I0717 19:54:05.799871  466832 main.go:141] libmachine: Using API Version  1
	I0717 19:54:05.799889  466832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:54:05.800186  466832 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:54:05.800238  466832 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:54:05.800545  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetState
	I0717 19:54:05.802149  466832 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:54:05.802387  466832 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:54:05.802474  466832 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:54:05.803726  466832 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0717 19:54:05.803741  466832 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:54:05.803753  466832 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:54:05.803780  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:54:05.803746  466832 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:54:05.804869  466832 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0717 19:54:05.804959  466832 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:54:05.804978  466832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:54:05.805000  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:54:05.806002  466832 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0717 19:54:05.806088  466832 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0717 19:54:05.806112  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:54:05.807524  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:54:05.807949  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:54:05.807970  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:54:05.808064  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:54:05.808460  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:54:05.808740  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:54:05.808832  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:54:05.809069  466832 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa Username:docker}
	I0717 19:54:05.809387  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:54:05.809829  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:54:05.809854  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:54:05.809894  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:54:05.810007  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:54:05.810181  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:54:05.810206  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:54:05.810295  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:54:05.810395  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:54:05.810477  466832 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa Username:docker}
	I0717 19:54:05.810528  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:54:05.810609  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:54:05.810677  466832 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa Username:docker}
	I0717 19:54:05.819210  466832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I0717 19:54:05.819633  466832 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:54:05.820067  466832 main.go:141] libmachine: Using API Version  1
	I0717 19:54:05.820088  466832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:54:05.820593  466832 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:54:05.820792  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetState
	I0717 19:54:05.822347  466832 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:54:05.822607  466832 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:54:05.822622  466832 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:54:05.822637  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:54:05.825192  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:54:05.825704  466832 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:53:41 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:54:05.825729  466832 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:54:05.825900  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:54:05.826090  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:54:05.826343  466832 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:54:05.826529  466832 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa Username:docker}
	I0717 19:54:05.985786  466832 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:54:06.007019  466832 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:54:06.007111  466832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:54:06.031984  466832 api_server.go:72] duration metric: took 278.184504ms to wait for apiserver process to appear ...
	I0717 19:54:06.032011  466832 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:54:06.032034  466832 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8443/healthz ...
	I0717 19:54:06.038841  466832 api_server.go:279] https://192.168.72.104:8443/healthz returned 200:
	ok
	I0717 19:54:06.040048  466832 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:54:06.040067  466832 api_server.go:131] duration metric: took 8.048652ms to wait for apiserver health ...
	I0717 19:54:06.040075  466832 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:54:06.046211  466832 system_pods.go:59] 8 kube-system pods found
	I0717 19:54:06.046236  466832 system_pods.go:61] "coredns-5cfdc65f69-lvmgk" [55cbf8bc-3d0d-4227-bd90-5722a2ffdcfc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:54:06.046245  466832 system_pods.go:61] "etcd-newest-cni-500710" [aacfe3f3-88a1-429e-9251-4084f6f4362d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:54:06.046255  466832 system_pods.go:61] "kube-apiserver-newest-cni-500710" [7d0283f5-faaa-4afb-b75f-7124ca7fe97a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:54:06.046261  466832 system_pods.go:61] "kube-controller-manager-newest-cni-500710" [665c82af-b815-4fff-82c6-b66416666be8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:54:06.046265  466832 system_pods.go:61] "kube-proxy-lbbdh" [e6a84690-a097-41c1-b8a8-27cbfd532824] Running
	I0717 19:54:06.046270  466832 system_pods.go:61] "kube-scheduler-newest-cni-500710" [e745a4cd-767b-45ec-888c-959d1662201f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:54:06.046287  466832 system_pods.go:61] "metrics-server-78fcd8795b-7npzx" [52f2bc9e-3df2-40c7-8ef3-ffaa106bcd68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:54:06.046291  466832 system_pods.go:61] "storage-provisioner" [c35177c1-0afe-444c-9214-b57eb332f1a0] Running
	I0717 19:54:06.046296  466832 system_pods.go:74] duration metric: took 6.21638ms to wait for pod list to return data ...
	I0717 19:54:06.046306  466832 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:54:06.049895  466832 default_sa.go:45] found service account: "default"
	I0717 19:54:06.049916  466832 default_sa.go:55] duration metric: took 3.604415ms for default service account to be created ...
	I0717 19:54:06.049927  466832 kubeadm.go:582] duration metric: took 296.132791ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 19:54:06.049942  466832 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:54:06.052719  466832 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:54:06.052739  466832 node_conditions.go:123] node cpu capacity is 2
	I0717 19:54:06.052750  466832 node_conditions.go:105] duration metric: took 2.803145ms to run NodePressure ...
	I0717 19:54:06.052762  466832 start.go:241] waiting for startup goroutines ...
	I0717 19:54:06.106640  466832 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:54:06.106663  466832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:54:06.113252  466832 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0717 19:54:06.113284  466832 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0717 19:54:06.126527  466832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:54:06.144960  466832 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:54:06.144999  466832 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:54:06.163460  466832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:54:06.175983  466832 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0717 19:54:06.176019  466832 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0717 19:54:06.210385  466832 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:54:06.210415  466832 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:54:06.224382  466832 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0717 19:54:06.224415  466832 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0717 19:54:06.256238  466832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:54:06.328538  466832 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0717 19:54:06.328563  466832 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0717 19:54:06.374038  466832 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0717 19:54:06.374069  466832 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0717 19:54:06.495088  466832 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0717 19:54:06.495121  466832 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0717 19:54:06.549892  466832 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0717 19:54:06.549923  466832 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0717 19:54:06.673857  466832 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0717 19:54:06.673896  466832 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0717 19:54:06.755548  466832 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 19:54:06.755573  466832 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0717 19:54:06.808242  466832 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 19:54:08.016135  466832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.852603765s)
	I0717 19:54:08.016195  466832 main.go:141] libmachine: Making call to close driver server
	I0717 19:54:08.016207  466832 main.go:141] libmachine: (newest-cni-500710) Calling .Close
	I0717 19:54:08.016232  466832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.759927799s)
	I0717 19:54:08.016286  466832 main.go:141] libmachine: Making call to close driver server
	I0717 19:54:08.016244  466832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.889681984s)
	I0717 19:54:08.016325  466832 main.go:141] libmachine: Making call to close driver server
	I0717 19:54:08.016303  466832 main.go:141] libmachine: (newest-cni-500710) Calling .Close
	I0717 19:54:08.016335  466832 main.go:141] libmachine: (newest-cni-500710) Calling .Close
	I0717 19:54:08.016530  466832 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:54:08.016545  466832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:54:08.016553  466832 main.go:141] libmachine: Making call to close driver server
	I0717 19:54:08.016553  466832 main.go:141] libmachine: (newest-cni-500710) DBG | Closing plugin on server side
	I0717 19:54:08.016559  466832 main.go:141] libmachine: (newest-cni-500710) Calling .Close
	I0717 19:54:08.016766  466832 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:54:08.016783  466832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:54:08.016799  466832 main.go:141] libmachine: Making call to close driver server
	I0717 19:54:08.016799  466832 main.go:141] libmachine: (newest-cni-500710) DBG | Closing plugin on server side
	I0717 19:54:08.016798  466832 main.go:141] libmachine: (newest-cni-500710) DBG | Closing plugin on server side
	I0717 19:54:08.016770  466832 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:54:08.016818  466832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:54:08.016826  466832 main.go:141] libmachine: Making call to close driver server
	I0717 19:54:08.016833  466832 main.go:141] libmachine: (newest-cni-500710) Calling .Close
	I0717 19:54:08.016840  466832 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:54:08.016848  466832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:54:08.016807  466832 main.go:141] libmachine: (newest-cni-500710) Calling .Close
	I0717 19:54:08.017125  466832 main.go:141] libmachine: (newest-cni-500710) DBG | Closing plugin on server side
	I0717 19:54:08.017218  466832 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:54:08.017260  466832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:54:08.019163  466832 main.go:141] libmachine: (newest-cni-500710) DBG | Closing plugin on server side
	I0717 19:54:08.019163  466832 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:54:08.019199  466832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:54:08.019209  466832 addons.go:475] Verifying addon metrics-server=true in "newest-cni-500710"
	I0717 19:54:08.025184  466832 main.go:141] libmachine: Making call to close driver server
	I0717 19:54:08.025204  466832 main.go:141] libmachine: (newest-cni-500710) Calling .Close
	I0717 19:54:08.025518  466832 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:54:08.025526  466832 main.go:141] libmachine: (newest-cni-500710) DBG | Closing plugin on server side
	I0717 19:54:08.025538  466832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:54:08.627732  466832 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.819420681s)
	I0717 19:54:08.627797  466832 main.go:141] libmachine: Making call to close driver server
	I0717 19:54:08.627818  466832 main.go:141] libmachine: (newest-cni-500710) Calling .Close
	I0717 19:54:08.628146  466832 main.go:141] libmachine: (newest-cni-500710) DBG | Closing plugin on server side
	I0717 19:54:08.628224  466832 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:54:08.628247  466832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:54:08.628262  466832 main.go:141] libmachine: Making call to close driver server
	I0717 19:54:08.628273  466832 main.go:141] libmachine: (newest-cni-500710) Calling .Close
	I0717 19:54:08.628550  466832 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:54:08.628567  466832 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:54:08.630387  466832 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-500710 addons enable metrics-server
	
	I0717 19:54:08.631780  466832 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0717 19:54:08.633090  466832 addons.go:510] duration metric: took 2.879259682s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0717 19:54:08.633132  466832 start.go:246] waiting for cluster config update ...
	I0717 19:54:08.633149  466832 start.go:255] writing updated cluster config ...
	I0717 19:54:08.633516  466832 ssh_runner.go:195] Run: rm -f paused
	I0717 19:54:08.690039  466832 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 19:54:08.691584  466832 out.go:177] * Done! kubectl is now configured to use "newest-cni-500710" cluster and "default" namespace by default
	
	
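	The log above shows the "waiting for apiserver healthz status" step: the client repeatedly probes https://192.168.72.104:8443/healthz until it answers 200 "ok", then records the duration metric. The following is a minimal, illustrative Go sketch of that polling pattern, not minikube's actual code; the host/port are copied from the log for illustration, and InsecureSkipVerify is an assumption made to keep the sketch self-contained (minikube itself trusts the cluster CA).

	// healthzwait: hedged sketch of polling an apiserver /healthz endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url every interval until it returns HTTP 200 with
	// body "ok" or the timeout expires. It reports how long the wait took.
	func waitForHealthz(url string, interval, timeout time.Duration) (time.Duration, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Simplification for this sketch only: a real client should
				// verify the server against the cluster CA certificate.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		start := time.Now()
		deadline := start.Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return time.Since(start), nil
				}
			}
			time.Sleep(interval)
		}
		return time.Since(start), fmt.Errorf("apiserver healthz not ready after %s", timeout)
	}

	func main() {
		// Address taken from the log above; purely illustrative.
		elapsed, err := waitForHealthz("https://192.168.72.104:8443/healthz", 500*time.Millisecond, 4*time.Minute)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Printf("apiserver healthy after %s\n", elapsed)
	}

	The same retry-until-deadline structure underlies the later waits in the log (kube-system pod listing, default service account, node conditions) before the addon manifests are scp'd to the node and applied with kubectl.
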
	==> CRI-O <==
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.030937862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f8188b7-0b47-4af1-9f20-528e7bf3b88b name=/runtime.v1.RuntimeService/Version
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.031857841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53fd03c9-006f-4363-8c16-892e074cea33 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.032512134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721246059032488884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53fd03c9-006f-4363-8c16-892e074cea33 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.033471941Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=daee85c9-2fe2-4598-8d65-21564b392ab2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.033577996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=daee85c9-2fe2-4598-8d65-21564b392ab2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.033803407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4ba7515d592da31a2b4c4476e465d890e7aa23e2f73da3630ba154b0962ec7a,PodSandboxId:5c849fbf37d24b13d02ec43ea34de4c5bb4900e8df6f47e46f77ddf03ec1bb64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721245113136120126,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 153a102e-f07b-46b4-a9d0-9e754237ca6e,},Annotations:map[string]string{io.kubernetes.container.hash: 69d38bc4,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d47e23333116aece2559a60326fe6a5df5839f93c25004eab27cdb9801dc63,PodSandboxId:9cb00855ffe2b7f82615e94a3c1b456857aa3345468448417c99504b1c702562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245112407734192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbtct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c24ce9ab-babb-4589-8046-e8e2d4ca68af,},Annotations:map[string]string{io.kubernetes.container.hash: 85329c3f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb1692aa3f9e24faa294a181c0c0f64462781685f9eaa9411352e2d25dc4708,PodSandboxId:921fbf5ac6336ae0391ff236907cd1ebd3f0d7cca3a44bf18428ac9236a36b68,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245112267949088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jnwgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f86efa81-cbe0-44a7-888f-639af3dc58ad,},Annotations:map[string]string{io.kubernetes.container.hash: 23c240d4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcae6f21a0ff5d48bf1935d3e99b48c424f21734057e63df951a3164da371fe,PodSandboxId:bf4ce38f928800d6d8e37b8a1f0cda9102a3fe25b1792d8e059a4f8bdcd2b6ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721245111105604307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhjq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092af79d-ebc0-4e16-97ef-725195e95344,},Annotations:map[string]string{io.kubernetes.container.hash: b6486252,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2378f4ea65709e70c83df2be208d867791b48264944909f45c931238c812b1,PodSandboxId:20a0dcbc6c82a702bbffb943ebccbfeafc27bdd65a23905cac9c47e872e5dff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721245091175296960,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14df199c96b83cb67a529e48a55d2c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7da9cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:103abd0d3d14d7c5b5011c6dc3e71bc8bd27babc9df0a8fea92d53e6c6206006,PodSandboxId:61ecd10ea0f7930a08c4066cae8f7c7aa4ef8bec03bcc63d7ab0f889f705f989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721245091146781011,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5e71085d4256531f7ac739262d6bfc6,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc51d24cdcb7f5c8c02c1a46f8e9c8b705df6afa70527e1ff4165d5ea670bdce,PodSandboxId:99bcefef6fff75d34890daf9bb5beef3a88e93a57436480b137af95cd6cd26c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721245091113599938,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff9bb6abc876dce8a11c05079b5f227,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b818b853547834df6b166294446b5c6d0222f3b91252733aad9621d70b1293,PodSandboxId:9a613dfa6983b3c14a990b6c66fb33c37a546f230842e06f71d746a484e5d57f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721245091040093713,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9084b0d455367170b4852ba68abb4dc6,},Annotations:map[string]string{io.kubernetes.container.hash: 704f1818,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=daee85c9-2fe2-4598-8d65-21564b392ab2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.075807812Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10a9d1b8-fee8-4663-8f2d-f3fc2374653d name=/runtime.v1.RuntimeService/Version
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.075900366Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10a9d1b8-fee8-4663-8f2d-f3fc2374653d name=/runtime.v1.RuntimeService/Version
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.077284327Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=14daf59c-bd22-42a9-9f56-cc8e00ad2277 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.077662642Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721246059077640795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14daf59c-bd22-42a9-9f56-cc8e00ad2277 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.078421815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbb0a826-4af8-4a27-89bd-f35b491fc872 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.078473040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbb0a826-4af8-4a27-89bd-f35b491fc872 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.078664323Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4ba7515d592da31a2b4c4476e465d890e7aa23e2f73da3630ba154b0962ec7a,PodSandboxId:5c849fbf37d24b13d02ec43ea34de4c5bb4900e8df6f47e46f77ddf03ec1bb64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721245113136120126,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 153a102e-f07b-46b4-a9d0-9e754237ca6e,},Annotations:map[string]string{io.kubernetes.container.hash: 69d38bc4,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d47e23333116aece2559a60326fe6a5df5839f93c25004eab27cdb9801dc63,PodSandboxId:9cb00855ffe2b7f82615e94a3c1b456857aa3345468448417c99504b1c702562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245112407734192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbtct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c24ce9ab-babb-4589-8046-e8e2d4ca68af,},Annotations:map[string]string{io.kubernetes.container.hash: 85329c3f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb1692aa3f9e24faa294a181c0c0f64462781685f9eaa9411352e2d25dc4708,PodSandboxId:921fbf5ac6336ae0391ff236907cd1ebd3f0d7cca3a44bf18428ac9236a36b68,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245112267949088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jnwgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f86efa81-cbe0-44a7-888f-639af3dc58ad,},Annotations:map[string]string{io.kubernetes.container.hash: 23c240d4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcae6f21a0ff5d48bf1935d3e99b48c424f21734057e63df951a3164da371fe,PodSandboxId:bf4ce38f928800d6d8e37b8a1f0cda9102a3fe25b1792d8e059a4f8bdcd2b6ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721245111105604307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhjq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092af79d-ebc0-4e16-97ef-725195e95344,},Annotations:map[string]string{io.kubernetes.container.hash: b6486252,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2378f4ea65709e70c83df2be208d867791b48264944909f45c931238c812b1,PodSandboxId:20a0dcbc6c82a702bbffb943ebccbfeafc27bdd65a23905cac9c47e872e5dff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721245091175296960,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14df199c96b83cb67a529e48a55d2c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7da9cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:103abd0d3d14d7c5b5011c6dc3e71bc8bd27babc9df0a8fea92d53e6c6206006,PodSandboxId:61ecd10ea0f7930a08c4066cae8f7c7aa4ef8bec03bcc63d7ab0f889f705f989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721245091146781011,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5e71085d4256531f7ac739262d6bfc6,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc51d24cdcb7f5c8c02c1a46f8e9c8b705df6afa70527e1ff4165d5ea670bdce,PodSandboxId:99bcefef6fff75d34890daf9bb5beef3a88e93a57436480b137af95cd6cd26c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721245091113599938,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff9bb6abc876dce8a11c05079b5f227,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b818b853547834df6b166294446b5c6d0222f3b91252733aad9621d70b1293,PodSandboxId:9a613dfa6983b3c14a990b6c66fb33c37a546f230842e06f71d746a484e5d57f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721245091040093713,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9084b0d455367170b4852ba68abb4dc6,},Annotations:map[string]string{io.kubernetes.container.hash: 704f1818,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbb0a826-4af8-4a27-89bd-f35b491fc872 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.087568543Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=e616d0d8-ade7-4230-a52a-8ee97dd97c20 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.087778908Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:747e3c04baaf8be3c1846ab9f9aab6c562fc86babedfd29a9141dc6bce79dff7,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-hvknj,Uid:d214e760-d49e-4554-85c2-77e5da1b150f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721245113068529320,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-hvknj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d214e760-d49e-4554-85c2-77e5da1b150f,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T19:38:32.761413982Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5c849fbf37d24b13d02ec43ea34de4c5bb4900e8df6f47e46f77ddf03ec1bb64,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:153a102e-f07b-46b4-a9d0-9e75
4237ca6e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721245113003626391,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 153a102e-f07b-46b4-a9d0-9e754237ca6e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-17T19:38:32.694725167Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:921fbf5ac6336ae0391ff236907cd1ebd3f0d7cca3a44bf18428ac9236a36b68,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jnwgp,Uid:f86efa81-cbe0-44a7-888f-639af3dc58ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721245111531473036,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-jnwgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86efa81-cbe0-44a7-888f-639af3dc58ad,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T19:38:31.224329933Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9cb00855ffe2b7f82615e94a3c1b456857aa3345468448417c99504b1c702562,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xbtct,Uid:c24ce9ab
-babb-4589-8046-e8e2d4ca68af,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721245111514028683,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbtct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c24ce9ab-babb-4589-8046-e8e2d4ca68af,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T19:38:31.204660220Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bf4ce38f928800d6d8e37b8a1f0cda9102a3fe25b1792d8e059a4f8bdcd2b6ab,Metadata:&PodSandboxMetadata{Name:kube-proxy-vhjq4,Uid:092af79d-ebc0-4e16-97ef-725195e95344,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721245110954643559,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vhjq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092af79d-ebc0-4e16-97ef-725195e95344,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T19:38:30.641422072Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:20a0dcbc6c82a702bbffb943ebccbfeafc27bdd65a23905cac9c47e872e5dff2,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-378944,Uid:14df199c96b83cb67a529e48a55d2c4c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721245090896433816,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14df199c96b83cb67a529e48a55d2c4c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.238:2379,kubernetes.io/config.hash: 14df199c96b83cb67a529e48a55d2c4c,kubernetes.io/config.seen: 2024-07-17T19:38:10.435128418Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9a613dfa6983b3c14a990b6c66fb33c37a5
46f230842e06f71d746a484e5d57f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-378944,Uid:9084b0d455367170b4852ba68abb4dc6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721245090893118486,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9084b0d455367170b4852ba68abb4dc6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.238:8444,kubernetes.io/config.hash: 9084b0d455367170b4852ba68abb4dc6,kubernetes.io/config.seen: 2024-07-17T19:38:10.435135365Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:61ecd10ea0f7930a08c4066cae8f7c7aa4ef8bec03bcc63d7ab0f889f705f989,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-378944,Uid:b5e71085d4256531f7ac739262d6bfc6,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1721245090891433004,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5e71085d4256531f7ac739262d6bfc6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b5e71085d4256531f7ac739262d6bfc6,kubernetes.io/config.seen: 2024-07-17T19:38:10.435138137Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:99bcefef6fff75d34890daf9bb5beef3a88e93a57436480b137af95cd6cd26c4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-378944,Uid:dff9bb6abc876dce8a11c05079b5f227,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721245090886033070,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: dff9bb6abc876dce8a11c05079b5f227,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dff9bb6abc876dce8a11c05079b5f227,kubernetes.io/config.seen: 2024-07-17T19:38:10.435136957Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e616d0d8-ade7-4230-a52a-8ee97dd97c20 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.088408510Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7252699-03c0-4858-bcaa-166dc98d4542 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.088478325Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7252699-03c0-4858-bcaa-166dc98d4542 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.089032840Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4ba7515d592da31a2b4c4476e465d890e7aa23e2f73da3630ba154b0962ec7a,PodSandboxId:5c849fbf37d24b13d02ec43ea34de4c5bb4900e8df6f47e46f77ddf03ec1bb64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721245113136120126,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 153a102e-f07b-46b4-a9d0-9e754237ca6e,},Annotations:map[string]string{io.kubernetes.container.hash: 69d38bc4,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d47e23333116aece2559a60326fe6a5df5839f93c25004eab27cdb9801dc63,PodSandboxId:9cb00855ffe2b7f82615e94a3c1b456857aa3345468448417c99504b1c702562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245112407734192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbtct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c24ce9ab-babb-4589-8046-e8e2d4ca68af,},Annotations:map[string]string{io.kubernetes.container.hash: 85329c3f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb1692aa3f9e24faa294a181c0c0f64462781685f9eaa9411352e2d25dc4708,PodSandboxId:921fbf5ac6336ae0391ff236907cd1ebd3f0d7cca3a44bf18428ac9236a36b68,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245112267949088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jnwgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f86efa81-cbe0-44a7-888f-639af3dc58ad,},Annotations:map[string]string{io.kubernetes.container.hash: 23c240d4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcae6f21a0ff5d48bf1935d3e99b48c424f21734057e63df951a3164da371fe,PodSandboxId:bf4ce38f928800d6d8e37b8a1f0cda9102a3fe25b1792d8e059a4f8bdcd2b6ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721245111105604307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhjq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092af79d-ebc0-4e16-97ef-725195e95344,},Annotations:map[string]string{io.kubernetes.container.hash: b6486252,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2378f4ea65709e70c83df2be208d867791b48264944909f45c931238c812b1,PodSandboxId:20a0dcbc6c82a702bbffb943ebccbfeafc27bdd65a23905cac9c47e872e5dff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721245091175296960,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14df199c96b83cb67a529e48a55d2c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7da9cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:103abd0d3d14d7c5b5011c6dc3e71bc8bd27babc9df0a8fea92d53e6c6206006,PodSandboxId:61ecd10ea0f7930a08c4066cae8f7c7aa4ef8bec03bcc63d7ab0f889f705f989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721245091146781011,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5e71085d4256531f7ac739262d6bfc6,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc51d24cdcb7f5c8c02c1a46f8e9c8b705df6afa70527e1ff4165d5ea670bdce,PodSandboxId:99bcefef6fff75d34890daf9bb5beef3a88e93a57436480b137af95cd6cd26c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721245091113599938,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff9bb6abc876dce8a11c05079b5f227,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b818b853547834df6b166294446b5c6d0222f3b91252733aad9621d70b1293,PodSandboxId:9a613dfa6983b3c14a990b6c66fb33c37a546f230842e06f71d746a484e5d57f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721245091040093713,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9084b0d455367170b4852ba68abb4dc6,},Annotations:map[string]string{io.kubernetes.container.hash: 704f1818,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7252699-03c0-4858-bcaa-166dc98d4542 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.115551173Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ee5223e-705a-47c1-bb00-b3fb3a86ade8 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.115629078Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ee5223e-705a-47c1-bb00-b3fb3a86ade8 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.116570720Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=449f0315-b452-4e55-9bb2-61c8d81beb60 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.117179145Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721246059117158711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=449f0315-b452-4e55-9bb2-61c8d81beb60 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.117740461Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59f2b1a0-15d5-4c90-97e6-056b6288817d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.117865194Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59f2b1a0-15d5-4c90-97e6-056b6288817d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:54:19 default-k8s-diff-port-378944 crio[728]: time="2024-07-17 19:54:19.118265722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4ba7515d592da31a2b4c4476e465d890e7aa23e2f73da3630ba154b0962ec7a,PodSandboxId:5c849fbf37d24b13d02ec43ea34de4c5bb4900e8df6f47e46f77ddf03ec1bb64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721245113136120126,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 153a102e-f07b-46b4-a9d0-9e754237ca6e,},Annotations:map[string]string{io.kubernetes.container.hash: 69d38bc4,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d47e23333116aece2559a60326fe6a5df5839f93c25004eab27cdb9801dc63,PodSandboxId:9cb00855ffe2b7f82615e94a3c1b456857aa3345468448417c99504b1c702562,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245112407734192,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xbtct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c24ce9ab-babb-4589-8046-e8e2d4ca68af,},Annotations:map[string]string{io.kubernetes.container.hash: 85329c3f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bb1692aa3f9e24faa294a181c0c0f64462781685f9eaa9411352e2d25dc4708,PodSandboxId:921fbf5ac6336ae0391ff236907cd1ebd3f0d7cca3a44bf18428ac9236a36b68,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245112267949088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jnwgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f86efa81-cbe0-44a7-888f-639af3dc58ad,},Annotations:map[string]string{io.kubernetes.container.hash: 23c240d4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcae6f21a0ff5d48bf1935d3e99b48c424f21734057e63df951a3164da371fe,PodSandboxId:bf4ce38f928800d6d8e37b8a1f0cda9102a3fe25b1792d8e059a4f8bdcd2b6ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721245111105604307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhjq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 092af79d-ebc0-4e16-97ef-725195e95344,},Annotations:map[string]string{io.kubernetes.container.hash: b6486252,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab2378f4ea65709e70c83df2be208d867791b48264944909f45c931238c812b1,PodSandboxId:20a0dcbc6c82a702bbffb943ebccbfeafc27bdd65a23905cac9c47e872e5dff2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721245091175296960,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14df199c96b83cb67a529e48a55d2c4c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c7da9cd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:103abd0d3d14d7c5b5011c6dc3e71bc8bd27babc9df0a8fea92d53e6c6206006,PodSandboxId:61ecd10ea0f7930a08c4066cae8f7c7aa4ef8bec03bcc63d7ab0f889f705f989,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721245091146781011,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5e71085d4256531f7ac739262d6bfc6,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc51d24cdcb7f5c8c02c1a46f8e9c8b705df6afa70527e1ff4165d5ea670bdce,PodSandboxId:99bcefef6fff75d34890daf9bb5beef3a88e93a57436480b137af95cd6cd26c4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721245091113599938,Labels:map[string]string
{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dff9bb6abc876dce8a11c05079b5f227,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14b818b853547834df6b166294446b5c6d0222f3b91252733aad9621d70b1293,PodSandboxId:9a613dfa6983b3c14a990b6c66fb33c37a546f230842e06f71d746a484e5d57f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721245091040093713,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-378944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9084b0d455367170b4852ba68abb4dc6,},Annotations:map[string]string{io.kubernetes.container.hash: 704f1818,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59f2b1a0-15d5-4c90-97e6-056b6288817d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e4ba7515d592d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   5c849fbf37d24       storage-provisioner
	24d47e2333311       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   9cb00855ffe2b       coredns-7db6d8ff4d-xbtct
	7bb1692aa3f9e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   921fbf5ac6336       coredns-7db6d8ff4d-jnwgp
	4dcae6f21a0ff       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   15 minutes ago      Running             kube-proxy                0                   bf4ce38f92880       kube-proxy-vhjq4
	ab2378f4ea657       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   20a0dcbc6c82a       etcd-default-k8s-diff-port-378944
	103abd0d3d14d       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   16 minutes ago      Running             kube-scheduler            2                   61ecd10ea0f79       kube-scheduler-default-k8s-diff-port-378944
	cc51d24cdcb7f       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   16 minutes ago      Running             kube-controller-manager   2                   99bcefef6fff7       kube-controller-manager-default-k8s-diff-port-378944
	14b818b853547       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   16 minutes ago      Running             kube-apiserver            2                   9a613dfa6983b       kube-apiserver-default-k8s-diff-port-378944
	
	
	==> coredns [24d47e23333116aece2559a60326fe6a5df5839f93c25004eab27cdb9801dc63] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [7bb1692aa3f9e24faa294a181c0c0f64462781685f9eaa9411352e2d25dc4708] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-378944
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-378944
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=default-k8s-diff-port-378944
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T19_38_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 19:38:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-378944
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 19:54:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 19:53:56 +0000   Wed, 17 Jul 2024 19:38:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 19:53:56 +0000   Wed, 17 Jul 2024 19:38:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 19:53:56 +0000   Wed, 17 Jul 2024 19:38:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 19:53:56 +0000   Wed, 17 Jul 2024 19:38:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.238
	  Hostname:    default-k8s-diff-port-378944
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4a42b743f7394d0994c7a7306917821b
	  System UUID:                4a42b743-f739-4d09-94c7-a7306917821b
	  Boot ID:                    a7d2dfb6-f1fc-4381-96be-ccbe07d367bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-jnwgp                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-xbtct                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-default-k8s-diff-port-378944                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-378944              250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-378944    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-vhjq4                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-default-k8s-diff-port-378944              100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-569cc877fc-hvknj                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-378944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-378944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-378944 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-378944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-378944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-378944 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node default-k8s-diff-port-378944 event: Registered Node default-k8s-diff-port-378944 in Controller
	
	
	==> dmesg <==
	[  +0.039771] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul17 19:33] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.286969] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.619288] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.586140] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.055076] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067520] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.204277] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.146874] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.328398] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[  +4.557310] systemd-fstab-generator[808]: Ignoring "noauto" option for root device
	[  +0.062630] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.987226] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +5.530871] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.320479] kauditd_printk_skb: 50 callbacks suppressed
	[  +7.022456] kauditd_printk_skb: 27 callbacks suppressed
	[Jul17 19:38] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.190238] systemd-fstab-generator[3596]: Ignoring "noauto" option for root device
	[  +4.785500] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.787293] systemd-fstab-generator[3916]: Ignoring "noauto" option for root device
	[ +14.743459] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.191358] systemd-fstab-generator[4183]: Ignoring "noauto" option for root device
	[Jul17 19:39] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [ab2378f4ea65709e70c83df2be208d867791b48264944909f45c931238c812b1] <==
	{"level":"info","ts":"2024-07-17T19:38:12.568719Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:38:12.568763Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:48:12.62476Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":720}
	{"level":"info","ts":"2024-07-17T19:48:12.634949Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":720,"took":"9.53543ms","hash":571954282,"current-db-size-bytes":2416640,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2416640,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-17T19:48:12.635071Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":571954282,"revision":720,"compact-revision":-1}
	{"level":"warn","ts":"2024-07-17T19:53:06.261837Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.198083ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5831757731258103319 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1200 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T19:53:06.262082Z","caller":"traceutil/trace.go:171","msg":"trace[1142943050] linearizableReadLoop","detail":"{readStateIndex:1393; appliedIndex:1392; }","duration":"244.500615ms","start":"2024-07-17T19:53:06.017542Z","end":"2024-07-17T19:53:06.262043Z","steps":["trace[1142943050] 'read index received'  (duration: 113.053022ms)","trace[1142943050] 'applied index is now lower than readState.Index'  (duration: 131.446015ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T19:53:06.262167Z","caller":"traceutil/trace.go:171","msg":"trace[420610669] transaction","detail":"{read_only:false; response_revision:1202; number_of_response:1; }","duration":"303.427943ms","start":"2024-07-17T19:53:05.95873Z","end":"2024-07-17T19:53:06.262158Z","steps":["trace[420610669] 'process raft request'  (duration: 171.913308ms)","trace[420610669] 'compare'  (duration: 130.002969ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T19:53:06.262307Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T19:53:05.958714Z","time spent":"303.522184ms","remote":"127.0.0.1:58064","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1200 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-17T19:53:06.262647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.092354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2024-07-17T19:53:06.262702Z","caller":"traceutil/trace.go:171","msg":"trace[76807192] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:1202; }","duration":"245.176197ms","start":"2024-07-17T19:53:06.017518Z","end":"2024-07-17T19:53:06.262695Z","steps":["trace[76807192] 'agreement among raft nodes before linearized reading'  (duration: 245.051001ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:53:06.262796Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.609837ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T19:53:06.262879Z","caller":"traceutil/trace.go:171","msg":"trace[1275730107] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1202; }","duration":"139.712279ms","start":"2024-07-17T19:53:06.123149Z","end":"2024-07-17T19:53:06.262861Z","steps":["trace[1275730107] 'agreement among raft nodes before linearized reading'  (duration: 139.610497ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:53:12.632245Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":963}
	{"level":"info","ts":"2024-07-17T19:53:12.636178Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":963,"took":"3.652016ms","hash":3575313092,"current-db-size-bytes":2416640,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1630208,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-17T19:53:12.636255Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3575313092,"revision":963,"compact-revision":720}
	{"level":"warn","ts":"2024-07-17T19:53:59.088289Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":5831757731258103580,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-07-17T19:53:59.28491Z","caller":"traceutil/trace.go:171","msg":"trace[1067941006] transaction","detail":"{read_only:false; response_revision:1246; number_of_response:1; }","duration":"714.297671ms","start":"2024-07-17T19:53:58.570568Z","end":"2024-07-17T19:53:59.284866Z","steps":["trace[1067941006] 'process raft request'  (duration: 714.099606ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:53:59.285273Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T19:53:58.570546Z","time spent":"714.502506ms","remote":"127.0.0.1:58064","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1245 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-07-17T19:53:59.285415Z","caller":"traceutil/trace.go:171","msg":"trace[1432678893] linearizableReadLoop","detail":"{readStateIndex:1449; appliedIndex:1449; }","duration":"697.380376ms","start":"2024-07-17T19:53:58.588022Z","end":"2024-07-17T19:53:59.285402Z","steps":["trace[1432678893] 'read index received'  (duration: 697.375329ms)","trace[1432678893] 'applied index is now lower than readState.Index'  (duration: 4.029µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T19:53:59.28565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.767625ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T19:53:59.285722Z","caller":"traceutil/trace.go:171","msg":"trace[861666241] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1246; }","duration":"164.877063ms","start":"2024-07-17T19:53:59.120831Z","end":"2024-07-17T19:53:59.285708Z","steps":["trace[861666241] 'agreement among raft nodes before linearized reading'  (duration: 164.667746ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:53:59.285909Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"697.912074ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T19:53:59.285966Z","caller":"traceutil/trace.go:171","msg":"trace[682763294] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1246; }","duration":"697.995778ms","start":"2024-07-17T19:53:58.587961Z","end":"2024-07-17T19:53:59.285957Z","steps":["trace[682763294] 'agreement among raft nodes before linearized reading'  (duration: 697.920369ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:53:59.286002Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T19:53:58.587882Z","time spent":"698.10911ms","remote":"127.0.0.1:57926","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	
	
	==> kernel <==
	 19:54:19 up 21 min,  0 users,  load average: 0.06, 0.13, 0.16
	Linux default-k8s-diff-port-378944 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [14b818b853547834df6b166294446b5c6d0222f3b91252733aad9621d70b1293] <==
	I0717 19:51:15.057691       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:51:15.059456       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:51:15.059555       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 19:51:15.059564       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:53:14.061664       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:53:14.061774       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0717 19:53:15.062832       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:53:15.062923       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 19:53:15.062932       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:53:15.063029       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:53:15.063106       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 19:53:15.064302       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 19:53:59.287066       1 trace.go:236] Trace[1798686878]: "Update" accept:application/json, */*,audit-id:abb225cf-5d7a-451d-9715-c7eac32cf928,client:192.168.50.238,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (17-Jul-2024 19:53:58.567) (total time: 719ms):
	Trace[1798686878]: ["GuaranteedUpdate etcd3" audit-id:abb225cf-5d7a-451d-9715-c7eac32cf928,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 718ms (19:53:58.568)
	Trace[1798686878]:  ---"Txn call completed" 717ms (19:53:59.286)]
	Trace[1798686878]: [719.367867ms] [719.367867ms] END
	W0717 19:54:15.063641       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:54:15.063773       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 19:54:15.063800       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:54:15.065008       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:54:15.065072       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 19:54:15.065105       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [cc51d24cdcb7f5c8c02c1a46f8e9c8b705df6afa70527e1ff4165d5ea670bdce] <==
	I0717 19:48:30.845954       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:49:00.351719       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:49:00.855134       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:49:30.357267       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:49:30.863951       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 19:49:42.574639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="271.545µs"
	I0717 19:49:56.572437       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="80.253µs"
	E0717 19:50:00.363308       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:50:00.872517       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:50:30.368696       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:50:30.882559       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:51:00.373976       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:51:00.889910       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:51:30.379307       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:51:30.900844       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:52:00.384610       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:52:00.909970       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:52:30.390474       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:52:30.921116       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:53:00.397037       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:53:00.932569       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:53:30.402680       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:53:30.941395       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:54:00.410088       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:54:00.949617       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4dcae6f21a0ff5d48bf1935d3e99b48c424f21734057e63df951a3164da371fe] <==
	I0717 19:38:31.509339       1 server_linux.go:69] "Using iptables proxy"
	I0717 19:38:31.525984       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.238"]
	I0717 19:38:31.577316       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 19:38:31.577368       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 19:38:31.577440       1 server_linux.go:165] "Using iptables Proxier"
	I0717 19:38:31.588554       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:38:31.588757       1 server.go:872] "Version info" version="v1.30.2"
	I0717 19:38:31.588769       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:38:31.590694       1 config.go:192] "Starting service config controller"
	I0717 19:38:31.590728       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 19:38:31.590751       1 config.go:101] "Starting endpoint slice config controller"
	I0717 19:38:31.590754       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 19:38:31.591328       1 config.go:319] "Starting node config controller"
	I0717 19:38:31.591354       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 19:38:31.691374       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 19:38:31.691444       1 shared_informer.go:320] Caches are synced for service config
	I0717 19:38:31.691661       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [103abd0d3d14d7c5b5011c6dc3e71bc8bd27babc9df0a8fea92d53e6c6206006] <==
	W0717 19:38:14.089944       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 19:38:14.089999       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 19:38:14.090027       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 19:38:14.089985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 19:38:14.939127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 19:38:14.939180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 19:38:15.033872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 19:38:15.033916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 19:38:15.048767       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 19:38:15.048816       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 19:38:15.130087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 19:38:15.130139       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 19:38:15.153539       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 19:38:15.153584       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:38:15.176961       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 19:38:15.177007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 19:38:15.210548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 19:38:15.210596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 19:38:15.243304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 19:38:15.243357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 19:38:15.326176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 19:38:15.326277       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 19:38:15.337720       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 19:38:15.337812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0717 19:38:17.682470       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 19:52:16 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:52:16.572036    3923 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:52:16 default-k8s-diff-port-378944 kubelet[3923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:52:16 default-k8s-diff-port-378944 kubelet[3923]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:52:16 default-k8s-diff-port-378944 kubelet[3923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:52:16 default-k8s-diff-port-378944 kubelet[3923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:52:20 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:52:20.553716    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:52:32 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:52:32.553012    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:52:44 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:52:44.552271    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:52:55 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:52:55.552750    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:53:08 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:53:08.552075    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:53:16 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:53:16.569991    3923 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:53:16 default-k8s-diff-port-378944 kubelet[3923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:53:16 default-k8s-diff-port-378944 kubelet[3923]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:53:16 default-k8s-diff-port-378944 kubelet[3923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:53:16 default-k8s-diff-port-378944 kubelet[3923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:53:20 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:53:20.552029    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:53:34 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:53:34.552521    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:53:46 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:53:46.552586    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:53:59 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:53:59.553140    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:54:13 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:54:13.552456    3923 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvknj" podUID="d214e760-d49e-4554-85c2-77e5da1b150f"
	Jul 17 19:54:16 default-k8s-diff-port-378944 kubelet[3923]: E0717 19:54:16.569292    3923 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:54:16 default-k8s-diff-port-378944 kubelet[3923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:54:16 default-k8s-diff-port-378944 kubelet[3923]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:54:16 default-k8s-diff-port-378944 kubelet[3923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:54:16 default-k8s-diff-port-378944 kubelet[3923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [e4ba7515d592da31a2b4c4476e465d890e7aa23e2f73da3630ba154b0962ec7a] <==
	I0717 19:38:33.238314       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 19:38:33.247911       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 19:38:33.247982       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 19:38:33.271003       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 19:38:33.274573       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-378944_a99e1451-b1f4-4720-b401-bbb284e90d24!
	I0717 19:38:33.272401       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3646205-7ea5-44df-80a4-502f2d564366", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-378944_a99e1451-b1f4-4720-b401-bbb284e90d24 became leader
	I0717 19:38:33.375461       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-378944_a99e1451-b1f4-4720-b401-bbb284e90d24!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-378944 -n default-k8s-diff-port-378944
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-378944 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-hvknj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-378944 describe pod metrics-server-569cc877fc-hvknj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-378944 describe pod metrics-server-569cc877fc-hvknj: exit status 1 (58.334089ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-hvknj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-378944 describe pod metrics-server-569cc877fc-hvknj: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (400.01s)
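The non-running pod flagged in the post-mortem above is the metrics-server pod that the kubelet log shows stuck in ImagePullBackOff (the addon was enabled with --registries=MetricsServer=fake.domain per the Audit table below, so the pull can never succeed), and it was already gone (NotFound) by the time the describe ran. As a rough manual re-check against the same profile (a sketch only, assuming the default-k8s-diff-port-378944 cluster is still running; these are plain kubectl calls, not part of the test harness):

	kubectl --context default-k8s-diff-port-378944 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	kubectl --context default-k8s-diff-port-378944 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide

The deployment name metrics-server is inferred from the pod name in the logs above, and the k8s-app=kubernetes-dashboard selector comes from the dashboard wait condition used by these tests; the first command should print the fake.domain-prefixed image, and the second lists whatever dashboard pods the addon managed to create.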

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (299.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-637675 -n embed-certs-637675
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-17 19:53:19.044310809 +0000 UTC m=+6655.361328452
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-637675 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-637675 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.702µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-637675 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
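Since the deployment info above came back empty, the image assertion had nothing to inspect. A minimal manual equivalent of that check (a sketch only, assuming the embed-certs-637675 context is still reachable; this is a plain kubectl call, not part of the harness) reads the scraper image straight from the deployment the test tried to describe:

	kubectl --context embed-certs-637675 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

If the dashboard addon had applied the --images=MetricsScraper=registry.k8s.io/echoserver:1.4 override listed in the Audit table below, the output would contain registry.k8s.io/echoserver:1.4, which is exactly what the failed assertion expects.
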
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-637675 -n embed-certs-637675
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-637675 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-637675 logs -n 25: (1.220820089s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo find                             | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo crio                             | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-369638                                       | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	| delete  | -p                                                     | disable-driver-mounts-728347 | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | disable-driver-mounts-728347                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:25 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-637675            | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC | 17 Jul 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-713715             | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC | 17 Jul 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-713715                                   | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-378944  | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC | 17 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC |                     |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-998147        | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-637675                 | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-713715                  | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC | 17 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-713715 --memory=2200                     | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:37 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-378944       | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:38 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-998147             | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:52 UTC | 17 Jul 24 19:52 UTC |
	| start   | -p newest-cni-500710 --memory=2200 --alsologtostderr   | newest-cni-500710            | jenkins | v1.33.1 | 17 Jul 24 19:52 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-713715                                   | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:53 UTC | 17 Jul 24 19:53 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:52:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:52:34.767774  465898 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:52:34.767999  465898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:52:34.768007  465898 out.go:304] Setting ErrFile to fd 2...
	I0717 19:52:34.768010  465898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:52:34.768198  465898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:52:34.768893  465898 out.go:298] Setting JSON to false
	I0717 19:52:34.770004  465898 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12898,"bootTime":1721233057,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:52:34.770072  465898 start.go:139] virtualization: kvm guest
	I0717 19:52:34.772405  465898 out.go:177] * [newest-cni-500710] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:52:34.773780  465898 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 19:52:34.773788  465898 notify.go:220] Checking for updates...
	I0717 19:52:34.776366  465898 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:52:34.777750  465898 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:52:34.779043  465898 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:52:34.780277  465898 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:52:34.781589  465898 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:52:34.783352  465898 config.go:182] Loaded profile config "default-k8s-diff-port-378944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:52:34.783466  465898 config.go:182] Loaded profile config "embed-certs-637675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:52:34.783580  465898 config.go:182] Loaded profile config "no-preload-713715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:52:34.783697  465898 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:52:34.821607  465898 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 19:52:34.822903  465898 start.go:297] selected driver: kvm2
	I0717 19:52:34.822927  465898 start.go:901] validating driver "kvm2" against <nil>
	I0717 19:52:34.822940  465898 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:52:34.823612  465898 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:52:34.823719  465898 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:52:34.839535  465898 install.go:137] /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 19:52:34.839582  465898 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0717 19:52:34.839615  465898 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0717 19:52:34.839861  465898 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 19:52:34.839923  465898 cni.go:84] Creating CNI manager for ""
	I0717 19:52:34.839942  465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:52:34.839959  465898 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 19:52:34.840050  465898 start.go:340] cluster config:
	{Name:newest-cni-500710 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-500710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:52:34.840161  465898 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:52:34.842355  465898 out.go:177] * Starting "newest-cni-500710" primary control-plane node in "newest-cni-500710" cluster
	I0717 19:52:34.843725  465898 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:52:34.843767  465898 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 19:52:34.843779  465898 cache.go:56] Caching tarball of preloaded images
	I0717 19:52:34.843902  465898 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:52:34.843933  465898 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0717 19:52:34.844059  465898 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/config.json ...
	I0717 19:52:34.844100  465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/config.json: {Name:mk20dfee504dbf17cdf63c89bd6f3d65ee6f5a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:52:34.844341  465898 start.go:360] acquireMachinesLock for newest-cni-500710: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:52:34.844376  465898 start.go:364] duration metric: took 19.479µs to acquireMachinesLock for "newest-cni-500710"
	I0717 19:52:34.844396  465898 start.go:93] Provisioning new machine with config: &{Name:newest-cni-500710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-500710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:52:34.844456  465898 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 19:52:34.846183  465898 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 19:52:34.846332  465898 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:52:34.846366  465898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:52:34.861330  465898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I0717 19:52:34.861788  465898 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:52:34.862363  465898 main.go:141] libmachine: Using API Version  1
	I0717 19:52:34.862389  465898 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:52:34.862698  465898 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:52:34.862930  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetMachineName
	I0717 19:52:34.863105  465898 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:52:34.863254  465898 start.go:159] libmachine.API.Create for "newest-cni-500710" (driver="kvm2")
	I0717 19:52:34.863282  465898 client.go:168] LocalClient.Create starting
	I0717 19:52:34.863321  465898 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem
	I0717 19:52:34.863361  465898 main.go:141] libmachine: Decoding PEM data...
	I0717 19:52:34.863383  465898 main.go:141] libmachine: Parsing certificate...
	I0717 19:52:34.863458  465898 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem
	I0717 19:52:34.863490  465898 main.go:141] libmachine: Decoding PEM data...
	I0717 19:52:34.863509  465898 main.go:141] libmachine: Parsing certificate...
	I0717 19:52:34.863532  465898 main.go:141] libmachine: Running pre-create checks...
	I0717 19:52:34.863547  465898 main.go:141] libmachine: (newest-cni-500710) Calling .PreCreateCheck
	I0717 19:52:34.863919  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetConfigRaw
	I0717 19:52:34.864367  465898 main.go:141] libmachine: Creating machine...
	I0717 19:52:34.864386  465898 main.go:141] libmachine: (newest-cni-500710) Calling .Create
	I0717 19:52:34.864526  465898 main.go:141] libmachine: (newest-cni-500710) Creating KVM machine...
	I0717 19:52:34.865881  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found existing default KVM network
	I0717 19:52:34.867212  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:34.867059  465921 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:23:3c:87} reservation:<nil>}
	I0717 19:52:34.868157  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:34.868074  465921 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fe:86:02} reservation:<nil>}
	I0717 19:52:34.868961  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:34.868900  465921 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:b5:5a:39} reservation:<nil>}
	I0717 19:52:34.870102  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:34.870025  465921 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a5760}
	I0717 19:52:34.870140  465898 main.go:141] libmachine: (newest-cni-500710) DBG | created network xml: 
	I0717 19:52:34.870155  465898 main.go:141] libmachine: (newest-cni-500710) DBG | <network>
	I0717 19:52:34.870166  465898 main.go:141] libmachine: (newest-cni-500710) DBG |   <name>mk-newest-cni-500710</name>
	I0717 19:52:34.870176  465898 main.go:141] libmachine: (newest-cni-500710) DBG |   <dns enable='no'/>
	I0717 19:52:34.870206  465898 main.go:141] libmachine: (newest-cni-500710) DBG |   
	I0717 19:52:34.870227  465898 main.go:141] libmachine: (newest-cni-500710) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0717 19:52:34.870246  465898 main.go:141] libmachine: (newest-cni-500710) DBG |     <dhcp>
	I0717 19:52:34.870258  465898 main.go:141] libmachine: (newest-cni-500710) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0717 19:52:34.870268  465898 main.go:141] libmachine: (newest-cni-500710) DBG |     </dhcp>
	I0717 19:52:34.870275  465898 main.go:141] libmachine: (newest-cni-500710) DBG |   </ip>
	I0717 19:52:34.870283  465898 main.go:141] libmachine: (newest-cni-500710) DBG |   
	I0717 19:52:34.870291  465898 main.go:141] libmachine: (newest-cni-500710) DBG | </network>
	I0717 19:52:34.870301  465898 main.go:141] libmachine: (newest-cni-500710) DBG | 
	I0717 19:52:34.875809  465898 main.go:141] libmachine: (newest-cni-500710) DBG | trying to create private KVM network mk-newest-cni-500710 192.168.72.0/24...
	I0717 19:52:34.949864  465898 main.go:141] libmachine: (newest-cni-500710) DBG | private KVM network mk-newest-cni-500710 192.168.72.0/24 created
	I0717 19:52:34.949914  465898 main.go:141] libmachine: (newest-cni-500710) Setting up store path in /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710 ...
	I0717 19:52:34.949940  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:34.949850  465921 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:52:34.949952  465898 main.go:141] libmachine: (newest-cni-500710) Building disk image from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 19:52:34.950054  465898 main.go:141] libmachine: (newest-cni-500710) Downloading /home/jenkins/minikube-integration/19282-392903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 19:52:35.243341  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:35.243154  465921 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa...
	I0717 19:52:35.501920  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:35.501762  465921 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/newest-cni-500710.rawdisk...
	I0717 19:52:35.501957  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Writing magic tar header
	I0717 19:52:35.501995  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Writing SSH key tar header
	I0717 19:52:35.502016  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:35.501914  465921 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710 ...
	I0717 19:52:35.502034  465898 main.go:141] libmachine: (newest-cni-500710) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710 (perms=drwx------)
	I0717 19:52:35.502056  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710
	I0717 19:52:35.502072  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube/machines
	I0717 19:52:35.502081  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:52:35.502092  465898 main.go:141] libmachine: (newest-cni-500710) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube/machines (perms=drwxr-xr-x)
	I0717 19:52:35.502107  465898 main.go:141] libmachine: (newest-cni-500710) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903/.minikube (perms=drwxr-xr-x)
	I0717 19:52:35.502121  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19282-392903
	I0717 19:52:35.502151  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 19:52:35.502177  465898 main.go:141] libmachine: (newest-cni-500710) Setting executable bit set on /home/jenkins/minikube-integration/19282-392903 (perms=drwxrwxr-x)
	I0717 19:52:35.502190  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Checking permissions on dir: /home/jenkins
	I0717 19:52:35.502202  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Checking permissions on dir: /home
	I0717 19:52:35.502213  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Skipping /home - not owner
	I0717 19:52:35.502232  465898 main.go:141] libmachine: (newest-cni-500710) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 19:52:35.502242  465898 main.go:141] libmachine: (newest-cni-500710) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 19:52:35.502249  465898 main.go:141] libmachine: (newest-cni-500710) Creating domain...
	I0717 19:52:35.503265  465898 main.go:141] libmachine: (newest-cni-500710) define libvirt domain using xml: 
	I0717 19:52:35.503287  465898 main.go:141] libmachine: (newest-cni-500710) <domain type='kvm'>
	I0717 19:52:35.503295  465898 main.go:141] libmachine: (newest-cni-500710)   <name>newest-cni-500710</name>
	I0717 19:52:35.503300  465898 main.go:141] libmachine: (newest-cni-500710)   <memory unit='MiB'>2200</memory>
	I0717 19:52:35.503305  465898 main.go:141] libmachine: (newest-cni-500710)   <vcpu>2</vcpu>
	I0717 19:52:35.503310  465898 main.go:141] libmachine: (newest-cni-500710)   <features>
	I0717 19:52:35.503316  465898 main.go:141] libmachine: (newest-cni-500710)     <acpi/>
	I0717 19:52:35.503323  465898 main.go:141] libmachine: (newest-cni-500710)     <apic/>
	I0717 19:52:35.503331  465898 main.go:141] libmachine: (newest-cni-500710)     <pae/>
	I0717 19:52:35.503341  465898 main.go:141] libmachine: (newest-cni-500710)     
	I0717 19:52:35.503350  465898 main.go:141] libmachine: (newest-cni-500710)   </features>
	I0717 19:52:35.503360  465898 main.go:141] libmachine: (newest-cni-500710)   <cpu mode='host-passthrough'>
	I0717 19:52:35.503371  465898 main.go:141] libmachine: (newest-cni-500710)   
	I0717 19:52:35.503380  465898 main.go:141] libmachine: (newest-cni-500710)   </cpu>
	I0717 19:52:35.503416  465898 main.go:141] libmachine: (newest-cni-500710)   <os>
	I0717 19:52:35.503439  465898 main.go:141] libmachine: (newest-cni-500710)     <type>hvm</type>
	I0717 19:52:35.503449  465898 main.go:141] libmachine: (newest-cni-500710)     <boot dev='cdrom'/>
	I0717 19:52:35.503459  465898 main.go:141] libmachine: (newest-cni-500710)     <boot dev='hd'/>
	I0717 19:52:35.503473  465898 main.go:141] libmachine: (newest-cni-500710)     <bootmenu enable='no'/>
	I0717 19:52:35.503483  465898 main.go:141] libmachine: (newest-cni-500710)   </os>
	I0717 19:52:35.503491  465898 main.go:141] libmachine: (newest-cni-500710)   <devices>
	I0717 19:52:35.503536  465898 main.go:141] libmachine: (newest-cni-500710)     <disk type='file' device='cdrom'>
	I0717 19:52:35.503560  465898 main.go:141] libmachine: (newest-cni-500710)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/boot2docker.iso'/>
	I0717 19:52:35.503574  465898 main.go:141] libmachine: (newest-cni-500710)       <target dev='hdc' bus='scsi'/>
	I0717 19:52:35.503611  465898 main.go:141] libmachine: (newest-cni-500710)       <readonly/>
	I0717 19:52:35.503623  465898 main.go:141] libmachine: (newest-cni-500710)     </disk>
	I0717 19:52:35.503635  465898 main.go:141] libmachine: (newest-cni-500710)     <disk type='file' device='disk'>
	I0717 19:52:35.503647  465898 main.go:141] libmachine: (newest-cni-500710)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 19:52:35.503663  465898 main.go:141] libmachine: (newest-cni-500710)       <source file='/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/newest-cni-500710.rawdisk'/>
	I0717 19:52:35.503676  465898 main.go:141] libmachine: (newest-cni-500710)       <target dev='hda' bus='virtio'/>
	I0717 19:52:35.503686  465898 main.go:141] libmachine: (newest-cni-500710)     </disk>
	I0717 19:52:35.503695  465898 main.go:141] libmachine: (newest-cni-500710)     <interface type='network'>
	I0717 19:52:35.503710  465898 main.go:141] libmachine: (newest-cni-500710)       <source network='mk-newest-cni-500710'/>
	I0717 19:52:35.503721  465898 main.go:141] libmachine: (newest-cni-500710)       <model type='virtio'/>
	I0717 19:52:35.503732  465898 main.go:141] libmachine: (newest-cni-500710)     </interface>
	I0717 19:52:35.503744  465898 main.go:141] libmachine: (newest-cni-500710)     <interface type='network'>
	I0717 19:52:35.503756  465898 main.go:141] libmachine: (newest-cni-500710)       <source network='default'/>
	I0717 19:52:35.503786  465898 main.go:141] libmachine: (newest-cni-500710)       <model type='virtio'/>
	I0717 19:52:35.503809  465898 main.go:141] libmachine: (newest-cni-500710)     </interface>
	I0717 19:52:35.503820  465898 main.go:141] libmachine: (newest-cni-500710)     <serial type='pty'>
	I0717 19:52:35.503829  465898 main.go:141] libmachine: (newest-cni-500710)       <target port='0'/>
	I0717 19:52:35.503834  465898 main.go:141] libmachine: (newest-cni-500710)     </serial>
	I0717 19:52:35.503843  465898 main.go:141] libmachine: (newest-cni-500710)     <console type='pty'>
	I0717 19:52:35.503853  465898 main.go:141] libmachine: (newest-cni-500710)       <target type='serial' port='0'/>
	I0717 19:52:35.503863  465898 main.go:141] libmachine: (newest-cni-500710)     </console>
	I0717 19:52:35.503875  465898 main.go:141] libmachine: (newest-cni-500710)     <rng model='virtio'>
	I0717 19:52:35.503885  465898 main.go:141] libmachine: (newest-cni-500710)       <backend model='random'>/dev/random</backend>
	I0717 19:52:35.503905  465898 main.go:141] libmachine: (newest-cni-500710)     </rng>
	I0717 19:52:35.503922  465898 main.go:141] libmachine: (newest-cni-500710)     
	I0717 19:52:35.503934  465898 main.go:141] libmachine: (newest-cni-500710)     
	I0717 19:52:35.503939  465898 main.go:141] libmachine: (newest-cni-500710)   </devices>
	I0717 19:52:35.503945  465898 main.go:141] libmachine: (newest-cni-500710) </domain>
	I0717 19:52:35.503952  465898 main.go:141] libmachine: (newest-cni-500710) 
	I0717 19:52:35.508989  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:49:c2:ce in network default
	I0717 19:52:35.509758  465898 main.go:141] libmachine: (newest-cni-500710) Ensuring networks are active...
	I0717 19:52:35.509784  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:35.510660  465898 main.go:141] libmachine: (newest-cni-500710) Ensuring network default is active
	I0717 19:52:35.510963  465898 main.go:141] libmachine: (newest-cni-500710) Ensuring network mk-newest-cni-500710 is active
	I0717 19:52:35.511522  465898 main.go:141] libmachine: (newest-cni-500710) Getting domain xml...
	I0717 19:52:35.512137  465898 main.go:141] libmachine: (newest-cni-500710) Creating domain...
	I0717 19:52:36.777503  465898 main.go:141] libmachine: (newest-cni-500710) Waiting to get IP...
	I0717 19:52:36.778265  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:36.778652  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:36.778696  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:36.778626  465921 retry.go:31] will retry after 214.377066ms: waiting for machine to come up
	I0717 19:52:36.995120  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:36.995606  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:36.995659  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:36.995566  465921 retry.go:31] will retry after 343.353816ms: waiting for machine to come up
	I0717 19:52:37.340150  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:37.340665  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:37.340699  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:37.340606  465921 retry.go:31] will retry after 375.581243ms: waiting for machine to come up
	I0717 19:52:37.717883  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:37.718421  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:37.718452  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:37.718362  465921 retry.go:31] will retry after 549.702915ms: waiting for machine to come up
	I0717 19:52:38.270051  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:38.270510  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:38.270539  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:38.270466  465921 retry.go:31] will retry after 696.630007ms: waiting for machine to come up
	I0717 19:52:38.968153  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:38.968606  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:38.968670  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:38.968553  465921 retry.go:31] will retry after 729.435483ms: waiting for machine to come up
	I0717 19:52:39.699220  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:39.699796  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:39.699827  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:39.699743  465921 retry.go:31] will retry after 1.069404688s: waiting for machine to come up
	I0717 19:52:40.770329  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:40.770733  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:40.770776  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:40.770684  465921 retry.go:31] will retry after 1.324069044s: waiting for machine to come up
	I0717 19:52:42.097255  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:42.097697  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:42.097730  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:42.097643  465921 retry.go:31] will retry after 1.572231128s: waiting for machine to come up
	I0717 19:52:43.671924  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:43.672506  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:43.672563  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:43.672438  465921 retry.go:31] will retry after 2.283478143s: waiting for machine to come up
	I0717 19:52:45.957637  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:45.958153  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:45.958175  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:45.958081  465921 retry.go:31] will retry after 2.813092288s: waiting for machine to come up
	I0717 19:52:48.775078  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:48.775586  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:48.775613  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:48.775531  465921 retry.go:31] will retry after 2.367550426s: waiting for machine to come up
	I0717 19:52:51.144282  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:51.144844  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find current IP address of domain newest-cni-500710 in network mk-newest-cni-500710
	I0717 19:52:51.144877  465898 main.go:141] libmachine: (newest-cni-500710) DBG | I0717 19:52:51.144765  465921 retry.go:31] will retry after 3.518690572s: waiting for machine to come up
	I0717 19:52:54.666084  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:54.666573  465898 main.go:141] libmachine: (newest-cni-500710) Found IP for machine: 192.168.72.104
	I0717 19:52:54.666624  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has current primary IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:54.666631  465898 main.go:141] libmachine: (newest-cni-500710) Reserving static IP address...
	I0717 19:52:54.666909  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find host DHCP lease matching {name: "newest-cni-500710", mac: "52:54:00:9b:88:f9", ip: "192.168.72.104"} in network mk-newest-cni-500710
	I0717 19:52:54.745521  465898 main.go:141] libmachine: (newest-cni-500710) Reserved static IP address: 192.168.72.104
	I0717 19:52:54.745559  465898 main.go:141] libmachine: (newest-cni-500710) Waiting for SSH to be available...
	I0717 19:52:54.745569  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Getting to WaitForSSH function...
	I0717 19:52:54.748420  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:54.748703  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710
	I0717 19:52:54.748741  465898 main.go:141] libmachine: (newest-cni-500710) DBG | unable to find defined IP address of network mk-newest-cni-500710 interface with MAC address 52:54:00:9b:88:f9
	I0717 19:52:54.748890  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Using SSH client type: external
	I0717 19:52:54.748916  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa (-rw-------)
	I0717 19:52:54.748965  465898 main.go:141] libmachine: (newest-cni-500710) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:52:54.748980  465898 main.go:141] libmachine: (newest-cni-500710) DBG | About to run SSH command:
	I0717 19:52:54.749019  465898 main.go:141] libmachine: (newest-cni-500710) DBG | exit 0
	I0717 19:52:54.753184  465898 main.go:141] libmachine: (newest-cni-500710) DBG | SSH cmd err, output: exit status 255: 
	I0717 19:52:54.753208  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 19:52:54.753218  465898 main.go:141] libmachine: (newest-cni-500710) DBG | command : exit 0
	I0717 19:52:54.753229  465898 main.go:141] libmachine: (newest-cni-500710) DBG | err     : exit status 255
	I0717 19:52:54.753239  465898 main.go:141] libmachine: (newest-cni-500710) DBG | output  : 
	I0717 19:52:57.756036  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Getting to WaitForSSH function...
	I0717 19:52:57.758616  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:57.759012  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:57.759046  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:57.759191  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Using SSH client type: external
	I0717 19:52:57.759219  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa (-rw-------)
	I0717 19:52:57.759267  465898 main.go:141] libmachine: (newest-cni-500710) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:52:57.759286  465898 main.go:141] libmachine: (newest-cni-500710) DBG | About to run SSH command:
	I0717 19:52:57.759299  465898 main.go:141] libmachine: (newest-cni-500710) DBG | exit 0
	I0717 19:52:57.884866  465898 main.go:141] libmachine: (newest-cni-500710) DBG | SSH cmd err, output: <nil>: 
	I0717 19:52:57.885287  465898 main.go:141] libmachine: (newest-cni-500710) KVM machine creation complete!
	I0717 19:52:57.885598  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetConfigRaw
	I0717 19:52:57.886228  465898 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:52:57.886450  465898 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:52:57.886644  465898 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 19:52:57.886660  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetState
	I0717 19:52:57.888162  465898 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 19:52:57.888180  465898 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 19:52:57.888187  465898 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 19:52:57.888192  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:57.890403  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:57.890747  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:57.890776  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:57.890901  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:57.891056  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:57.891265  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:57.891440  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:57.891622  465898 main.go:141] libmachine: Using SSH client type: native
	I0717 19:52:57.891829  465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:52:57.891842  465898 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 19:52:57.991867  465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:52:57.991887  465898 main.go:141] libmachine: Detecting the provisioner...
	I0717 19:52:57.991895  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:57.994569  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:57.994918  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:57.994949  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:57.995096  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:57.995296  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:57.995481  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:57.995626  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:57.995840  465898 main.go:141] libmachine: Using SSH client type: native
	I0717 19:52:57.996033  465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:52:57.996046  465898 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 19:52:58.097422  465898 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 19:52:58.097505  465898 main.go:141] libmachine: found compatible host: buildroot
	I0717 19:52:58.097516  465898 main.go:141] libmachine: Provisioning with buildroot...
	I0717 19:52:58.097538  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetMachineName
	I0717 19:52:58.097810  465898 buildroot.go:166] provisioning hostname "newest-cni-500710"
	I0717 19:52:58.097837  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetMachineName
	I0717 19:52:58.098041  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:58.100592  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.100950  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:58.100968  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.101147  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:58.101343  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:58.101484  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:58.101627  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:58.101793  465898 main.go:141] libmachine: Using SSH client type: native
	I0717 19:52:58.102037  465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:52:58.102052  465898 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-500710 && echo "newest-cni-500710" | sudo tee /etc/hostname
	I0717 19:52:58.225128  465898 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-500710
	
	I0717 19:52:58.225183  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:58.228010  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.228335  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:58.228364  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.228575  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:58.228785  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:58.228958  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:58.229111  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:58.229298  465898 main.go:141] libmachine: Using SSH client type: native
	I0717 19:52:58.229520  465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:52:58.229545  465898 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-500710' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-500710/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-500710' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:52:58.342896  465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:52:58.342938  465898 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:52:58.342987  465898 buildroot.go:174] setting up certificates
	I0717 19:52:58.343005  465898 provision.go:84] configureAuth start
	I0717 19:52:58.343022  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetMachineName
	I0717 19:52:58.343341  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetIP
	I0717 19:52:58.346276  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.346751  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:58.346784  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.346889  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:58.349044  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.349386  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:58.349411  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.349642  465898 provision.go:143] copyHostCerts
	I0717 19:52:58.349722  465898 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:52:58.349749  465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:52:58.349836  465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:52:58.349968  465898 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:52:58.349978  465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:52:58.350018  465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:52:58.350115  465898 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:52:58.350125  465898 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:52:58.350158  465898 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:52:58.350238  465898 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.newest-cni-500710 san=[127.0.0.1 192.168.72.104 localhost minikube newest-cni-500710]
	I0717 19:52:58.503609  465898 provision.go:177] copyRemoteCerts
	I0717 19:52:58.503684  465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:52:58.503717  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:58.506281  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.506750  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:58.506778  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.507009  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:58.507231  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:58.507395  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:58.507575  465898 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa Username:docker}
	I0717 19:52:58.587267  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:52:58.612001  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:52:58.635418  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 19:52:58.660399  465898 provision.go:87] duration metric: took 317.376332ms to configureAuth
	I0717 19:52:58.660432  465898 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:52:58.660689  465898 config.go:182] Loaded profile config "newest-cni-500710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:52:58.660767  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:58.663622  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.663912  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:58.663935  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.664128  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:58.664340  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:58.664520  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:58.664669  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:58.664911  465898 main.go:141] libmachine: Using SSH client type: native
	I0717 19:52:58.665111  465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:52:58.665132  465898 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:52:58.926130  465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:52:58.926163  465898 main.go:141] libmachine: Checking connection to Docker...
	I0717 19:52:58.926175  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetURL
	I0717 19:52:58.927502  465898 main.go:141] libmachine: (newest-cni-500710) DBG | Using libvirt version 6000000
	I0717 19:52:58.929908  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.930294  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:58.930325  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.930510  465898 main.go:141] libmachine: Docker is up and running!
	I0717 19:52:58.930524  465898 main.go:141] libmachine: Reticulating splines...
	I0717 19:52:58.930531  465898 client.go:171] duration metric: took 24.067239354s to LocalClient.Create
	I0717 19:52:58.930555  465898 start.go:167] duration metric: took 24.067302202s to libmachine.API.Create "newest-cni-500710"
	I0717 19:52:58.930569  465898 start.go:293] postStartSetup for "newest-cni-500710" (driver="kvm2")
	I0717 19:52:58.930585  465898 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:52:58.930603  465898 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:52:58.930857  465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:52:58.930888  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:58.932791  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.933115  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:58.933144  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:58.933261  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:58.933455  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:58.933596  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:58.933741  465898 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa Username:docker}
	I0717 19:52:59.017055  465898 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:52:59.022210  465898 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:52:59.022243  465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:52:59.022315  465898 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:52:59.022390  465898 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:52:59.022536  465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:52:59.033029  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:52:59.056620  465898 start.go:296] duration metric: took 126.019682ms for postStartSetup
	I0717 19:52:59.056673  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetConfigRaw
	I0717 19:52:59.057273  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetIP
	I0717 19:52:59.059994  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.060342  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:59.060373  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.060656  465898 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/config.json ...
	I0717 19:52:59.060822  465898 start.go:128] duration metric: took 24.216348379s to createHost
	I0717 19:52:59.060845  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:59.063393  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.063716  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:59.063754  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.063877  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:59.064084  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:59.064258  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:59.064419  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:59.064619  465898 main.go:141] libmachine: Using SSH client type: native
	I0717 19:52:59.064813  465898 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0717 19:52:59.064826  465898 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:52:59.165476  465898 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721245979.141065718
	
	I0717 19:52:59.165498  465898 fix.go:216] guest clock: 1721245979.141065718
	I0717 19:52:59.165506  465898 fix.go:229] Guest: 2024-07-17 19:52:59.141065718 +0000 UTC Remote: 2024-07-17 19:52:59.060832447 +0000 UTC m=+24.330750472 (delta=80.233271ms)
	I0717 19:52:59.165539  465898 fix.go:200] guest clock delta is within tolerance: 80.233271ms
	I0717 19:52:59.165544  465898 start.go:83] releasing machines lock for "newest-cni-500710", held for 24.32115845s
	I0717 19:52:59.165562  465898 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:52:59.165824  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetIP
	I0717 19:52:59.168636  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.169031  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:59.169060  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.169185  465898 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:52:59.169779  465898 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:52:59.169974  465898 main.go:141] libmachine: (newest-cni-500710) Calling .DriverName
	I0717 19:52:59.170098  465898 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:52:59.170143  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:59.170197  465898 ssh_runner.go:195] Run: cat /version.json
	I0717 19:52:59.170219  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHHostname
	I0717 19:52:59.173096  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.173234  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.173500  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:59.173527  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.173733  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:59.173834  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:52:59.173868  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:52:59.173897  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:59.174022  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHPort
	I0717 19:52:59.174185  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHKeyPath
	I0717 19:52:59.174211  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:59.174348  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetSSHUsername
	I0717 19:52:59.174388  465898 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa Username:docker}
	I0717 19:52:59.174458  465898 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/newest-cni-500710/id_rsa Username:docker}
	I0717 19:52:59.249657  465898 ssh_runner.go:195] Run: systemctl --version
	I0717 19:52:59.277869  465898 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:52:59.441843  465898 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:52:59.448001  465898 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:52:59.448078  465898 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:52:59.468227  465898 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:52:59.468265  465898 start.go:495] detecting cgroup driver to use...
	I0717 19:52:59.468347  465898 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:52:59.491442  465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:52:59.506251  465898 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:52:59.506348  465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:52:59.519939  465898 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:52:59.533404  465898 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:52:59.654673  465898 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:52:59.806971  465898 docker.go:233] disabling docker service ...
	I0717 19:52:59.807068  465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:52:59.821705  465898 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:52:59.835046  465898 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:52:59.982140  465898 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:53:00.110908  465898 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:53:00.126060  465898 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:53:00.145395  465898 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 19:53:00.145472  465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:00.157222  465898 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:53:00.157298  465898 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:00.167978  465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:00.179059  465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:00.190133  465898 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:53:00.201263  465898 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:00.212434  465898 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:00.230560  465898 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:53:00.241400  465898 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:53:00.250916  465898 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:53:00.250963  465898 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:53:00.263667  465898 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:53:00.273256  465898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:53:00.392220  465898 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:53:00.554438  465898 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:53:00.554529  465898 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:53:00.560098  465898 start.go:563] Will wait 60s for crictl version
	I0717 19:53:00.560155  465898 ssh_runner.go:195] Run: which crictl
	I0717 19:53:00.564406  465898 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:53:00.603169  465898 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:53:00.603264  465898 ssh_runner.go:195] Run: crio --version
	I0717 19:53:00.634731  465898 ssh_runner.go:195] Run: crio --version
	I0717 19:53:00.666668  465898 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 19:53:00.667952  465898 main.go:141] libmachine: (newest-cni-500710) Calling .GetIP
	I0717 19:53:00.670693  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:00.671029  465898 main.go:141] libmachine: (newest-cni-500710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:88:f9", ip: ""} in network mk-newest-cni-500710: {Iface:virbr4 ExpiryTime:2024-07-17 20:52:49 +0000 UTC Type:0 Mac:52:54:00:9b:88:f9 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:newest-cni-500710 Clientid:01:52:54:00:9b:88:f9}
	I0717 19:53:00.671050  465898 main.go:141] libmachine: (newest-cni-500710) DBG | domain newest-cni-500710 has defined IP address 192.168.72.104 and MAC address 52:54:00:9b:88:f9 in network mk-newest-cni-500710
	I0717 19:53:00.671291  465898 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 19:53:00.675824  465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:53:00.690521  465898 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0717 19:53:00.691904  465898 kubeadm.go:883] updating cluster {Name:newest-cni-500710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-500710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:53:00.692059  465898 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:53:00.692134  465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:53:00.726968  465898 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 19:53:00.727059  465898 ssh_runner.go:195] Run: which lz4
	I0717 19:53:00.731300  465898 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:53:00.735768  465898 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:53:00.735812  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0717 19:53:02.163077  465898 crio.go:462] duration metric: took 1.431804194s to copy over tarball
	I0717 19:53:02.163158  465898 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:53:04.258718  465898 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.095531457s)
	I0717 19:53:04.258753  465898 crio.go:469] duration metric: took 2.095647704s to extract the tarball
	I0717 19:53:04.258760  465898 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:53:04.297137  465898 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:53:04.347484  465898 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:53:04.347515  465898 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:53:04.347527  465898 kubeadm.go:934] updating node { 192.168.72.104 8443 v1.31.0-beta.0 crio true true} ...
	I0717 19:53:04.347703  465898 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-500710 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-500710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:53:04.347840  465898 ssh_runner.go:195] Run: crio config
	I0717 19:53:04.405395  465898 cni.go:84] Creating CNI manager for ""
	I0717 19:53:04.405416  465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:53:04.405426  465898 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0717 19:53:04.405456  465898 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.104 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-500710 NodeName:newest-cni-500710 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:53:04.405637  465898 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-500710"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
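The kubeadm config rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), and the pod subnet declared under networking.podSubnet is expected to agree with the clusterCIDR handed to kube-proxy. As a minimal sketch (not part of the test harness), assuming gopkg.in/yaml.v3 and a hypothetical local copy of the generated file named kubeadm.yaml, the two values can be cross-checked like this:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Hypothetical local copy of the generated /var/tmp/minikube/kubeadm.yaml.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	var podSubnet, clusterCIDR string
	for {
		// Decode each YAML document in the multi-document stream in turn.
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		switch doc["kind"] {
		case "ClusterConfiguration":
			if net, ok := doc["networking"].(map[string]interface{}); ok {
				podSubnet, _ = net["podSubnet"].(string)
			}
		case "KubeProxyConfiguration":
			clusterCIDR, _ = doc["clusterCIDR"].(string)
		}
	}
	fmt.Printf("podSubnet=%q clusterCIDR=%q match=%v\n", podSubnet, clusterCIDR, podSubnet == clusterCIDR)
}

For the config above both values are 10.42.0.0/16, matching the kubeadm pod-network-cidr extra option passed to the profile.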
	I0717 19:53:04.405717  465898 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 19:53:04.416292  465898 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:53:04.416382  465898 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:53:04.427433  465898 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0717 19:53:04.445729  465898 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 19:53:04.463077  465898 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0717 19:53:04.480892  465898 ssh_runner.go:195] Run: grep 192.168.72.104	control-plane.minikube.internal$ /etc/hosts
	I0717 19:53:04.484946  465898 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:53:04.498690  465898 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:53:04.638586  465898 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:53:04.656982  465898 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710 for IP: 192.168.72.104
	I0717 19:53:04.657011  465898 certs.go:194] generating shared ca certs ...
	I0717 19:53:04.657038  465898 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:53:04.657256  465898 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:53:04.657320  465898 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:53:04.657334  465898 certs.go:256] generating profile certs ...
	I0717 19:53:04.657410  465898 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/client.key
	I0717 19:53:04.657441  465898 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/client.crt with IP's: []
	I0717 19:53:04.802854  465898 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/client.crt ...
	I0717 19:53:04.802892  465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/client.crt: {Name:mkbdc92807370e837be9fde73dc8b8e0802b90f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:53:04.803105  465898 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/client.key ...
	I0717 19:53:04.803122  465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/client.key: {Name:mk116d2e8b66bda777a94ad74c0c061d9e613c9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:53:04.803243  465898 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.key.c59b9261
	I0717 19:53:04.803267  465898 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.crt.c59b9261 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.104]
	I0717 19:53:04.894397  465898 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.crt.c59b9261 ...
	I0717 19:53:04.894427  465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.crt.c59b9261: {Name:mk7dcae462b907b4660fd05a42e1f25ce611240d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:53:04.894589  465898 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.key.c59b9261 ...
	I0717 19:53:04.894602  465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.key.c59b9261: {Name:mk3d09d7d7cabc56058b70a05ca2d9fbeaa09b21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:53:04.894680  465898 certs.go:381] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.crt.c59b9261 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.crt
	I0717 19:53:04.894789  465898 certs.go:385] copying /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.key.c59b9261 -> /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.key
	I0717 19:53:04.894855  465898 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.key
	I0717 19:53:04.894873  465898 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.crt with IP's: []
	I0717 19:53:05.040563  465898 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.crt ...
	I0717 19:53:05.040607  465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.crt: {Name:mkd4b4e27352e6c439affe243713209d4973dc35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:53:05.040847  465898 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.key ...
	I0717 19:53:05.040872  465898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.key: {Name:mk4fdf3be97ac5b32e32636ab03155d9176e2950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:53:05.041078  465898 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:53:05.041115  465898 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:53:05.041122  465898 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:53:05.041153  465898 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:53:05.041174  465898 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:53:05.041195  465898 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:53:05.041229  465898 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:53:05.041996  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:53:05.069631  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:53:05.095458  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:53:05.119956  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:53:05.144412  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 19:53:05.168926  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:53:05.194999  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:53:05.219901  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/newest-cni-500710/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:53:05.246572  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:53:05.273077  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:53:05.298253  465898 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:53:05.323163  465898 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:53:05.340280  465898 ssh_runner.go:195] Run: openssl version
	I0717 19:53:05.346165  465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:53:05.356700  465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:53:05.361276  465898 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:53:05.361321  465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:53:05.367234  465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:53:05.378121  465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:53:05.389168  465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:53:05.394077  465898 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:53:05.394142  465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:53:05.399947  465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:53:05.411101  465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:53:05.422899  465898 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:53:05.428001  465898 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:53:05.428069  465898 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:53:05.434134  465898 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
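The symlinks created above follow OpenSSL's hashed-directory convention: each CA PEM installed under /usr/share/ca-certificates is linked into /etc/ssl/certs as <subject-hash>.0, where the hash comes from openssl x509 -hash -noout. As a hedged sketch of what those steps rely on (assuming a hypothetical local copy named minikubeCA.pem), a loadability check in Go could look like:

package main

import (
	"crypto/x509"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical local copy of /usr/share/ca-certificates/minikubeCA.pem.
	pemBytes, err := os.ReadFile("minikubeCA.pem")
	if err != nil {
		log.Fatal(err)
	}
	// AppendCertsFromPEM returns false if no parseable certificate was found.
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pemBytes) {
		log.Fatal("no usable certificates found in PEM")
	}
	fmt.Println("CA certificate parsed and added to the pool")
}

A PEM that fails this check would also fail the openssl x509 -hash step logged above, so the symlink would never be created.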
	I0717 19:53:05.448695  465898 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:53:05.453265  465898 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 19:53:05.453330  465898 kubeadm.go:392] StartCluster: {Name:newest-cni-500710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:newest-cni-500710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:53:05.453418  465898 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:53:05.453481  465898 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:53:05.521089  465898 cri.go:89] found id: ""
	I0717 19:53:05.521174  465898 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:53:05.533298  465898 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:53:05.545777  465898 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:53:05.558012  465898 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:53:05.558044  465898 kubeadm.go:157] found existing configuration files:
	
	I0717 19:53:05.558103  465898 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:53:05.568744  465898 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:53:05.568824  465898 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:53:05.580464  465898 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:53:05.590280  465898 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:53:05.590344  465898 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:53:05.601812  465898 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:53:05.611595  465898 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:53:05.611655  465898 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:53:05.623287  465898 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:53:05.633221  465898 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:53:05.633288  465898 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:53:05.644991  465898 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:53:05.758644  465898 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 19:53:05.758757  465898 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:53:05.881869  465898 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:53:05.882091  465898 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:53:05.882257  465898 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 19:53:05.900635  465898 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:53:06.185968  465898 out.go:204]   - Generating certificates and keys ...
	I0717 19:53:06.186135  465898 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:53:06.186216  465898 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:53:06.186298  465898 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 19:53:06.232108  465898 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 19:53:06.503680  465898 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 19:53:06.611201  465898 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 19:53:06.770058  465898 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 19:53:06.770203  465898 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-500710] and IPs [192.168.72.104 127.0.0.1 ::1]
	I0717 19:53:07.048582  465898 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 19:53:07.048824  465898 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-500710] and IPs [192.168.72.104 127.0.0.1 ::1]
	I0717 19:53:07.265137  465898 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 19:53:07.474647  465898 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 19:53:07.785708  465898 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 19:53:07.785798  465898 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:53:07.922554  465898 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:53:08.134410  465898 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:53:08.298182  465898 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:53:08.486413  465898 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:53:08.663674  465898 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:53:08.664238  465898 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:53:08.667348  465898 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:53:08.669554  465898 out.go:204]   - Booting up control plane ...
	I0717 19:53:08.669724  465898 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:53:08.669836  465898 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:53:08.670927  465898 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:53:08.689791  465898 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:53:08.696429  465898 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:53:08.696502  465898 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:53:08.845868  465898 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:53:08.846020  465898 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:53:09.345273  465898 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.562429ms
	I0717 19:53:09.345351  465898 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 19:53:14.843636  465898 kubeadm.go:310] [api-check] The API server is healthy after 5.501811667s
	I0717 19:53:14.860010  465898 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:53:14.880869  465898 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:53:14.916432  465898 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:53:14.916702  465898 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-500710 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:53:14.930924  465898 kubeadm.go:310] [bootstrap-token] Using token: k00isy.dnwelxujxlldt6m5
	I0717 19:53:14.932244  465898 out.go:204]   - Configuring RBAC rules ...
	I0717 19:53:14.932428  465898 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:53:14.939628  465898 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:53:14.947583  465898 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:53:14.952031  465898 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:53:14.956150  465898 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:53:14.960246  465898 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:53:15.252494  465898 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:53:15.676445  465898 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 19:53:16.251855  465898 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 19:53:16.251882  465898 kubeadm.go:310] 
	I0717 19:53:16.251955  465898 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 19:53:16.251967  465898 kubeadm.go:310] 
	I0717 19:53:16.252061  465898 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 19:53:16.252072  465898 kubeadm.go:310] 
	I0717 19:53:16.252103  465898 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 19:53:16.252184  465898 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:53:16.252257  465898 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:53:16.252269  465898 kubeadm.go:310] 
	I0717 19:53:16.252357  465898 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 19:53:16.252372  465898 kubeadm.go:310] 
	I0717 19:53:16.252433  465898 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:53:16.252443  465898 kubeadm.go:310] 
	I0717 19:53:16.252536  465898 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 19:53:16.252604  465898 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:53:16.252707  465898 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:53:16.252720  465898 kubeadm.go:310] 
	I0717 19:53:16.252796  465898 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:53:16.252863  465898 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 19:53:16.252869  465898 kubeadm.go:310] 
	I0717 19:53:16.252956  465898 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k00isy.dnwelxujxlldt6m5 \
	I0717 19:53:16.253105  465898 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 19:53:16.253131  465898 kubeadm.go:310] 	--control-plane 
	I0717 19:53:16.253138  465898 kubeadm.go:310] 
	I0717 19:53:16.253254  465898 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:53:16.253266  465898 kubeadm.go:310] 
	I0717 19:53:16.253430  465898 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k00isy.dnwelxujxlldt6m5 \
	I0717 19:53:16.253616  465898 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 19:53:16.254850  465898 kubeadm.go:310] W0717 19:53:05.738768     850 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 19:53:16.255195  465898 kubeadm.go:310] W0717 19:53:05.739646     850 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 19:53:16.255306  465898 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
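The --discovery-token-ca-cert-hash value printed in the join commands above is defined by kubeadm as the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch to recompute it (assuming a hypothetical local copy of /var/lib/minikube/certs/ca.crt named ca.crt) could look like:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical local copy of the cluster CA certificate.
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block found in PEM")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}

Comparing the printed sha256:<hex> digest against the value embedded in the join command is how a joining node can confirm it is talking to the expected cluster CA.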
	I0717 19:53:16.255342  465898 cni.go:84] Creating CNI manager for ""
	I0717 19:53:16.255354  465898 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:53:16.257262  465898 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.689374156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245999689349860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5d5528d-8e42-4ef9-b9c2-766459ef3f4a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.690184817Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5f3acdc-0b8c-4353-aaa4-9114910b17dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.690239357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5f3acdc-0b8c-4353-aaa4-9114910b17dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.690422466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48e5a7e0f2ab78ae01bb2cd94dc7f9263c45ae6f2c395ddf07a0345de994354c,PodSandboxId:728b051abda92b9142c884ee532f4ac287339ee45160c63a0f4cac6e55e60d07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721245154483205900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11a18e44-b523-46b2-a890-dd693460e032,},Annotations:map[string]string{io.kubernetes.container.hash: 46490f3f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24336c4ef38287d9898eede33a346456e43912d0645a47e1ad017f588c33f5fc,PodSandboxId:7fdb130b2f33b50b1d2677d8b84782c31011f61607063b773d5f5fb49e5f0fb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245153052073256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-45xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c936942-55bb-44c9-b446-365ec316c390,},Annotations:map[string]string{io.kubernetes.container.hash: c0d9cec2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c225d15e2f6e8d567d624062f936369e4e42076ff901dc80241a0d8f2b237a,PodSandboxId:3d0f83a962a14e94ea404c00c086f11b0dec6f9f7eb514c4ca5c1a8ef678b478,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245153000966515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nw8g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
313a484-73be-49e2-a483-b15f47abc24a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a2d088d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b02cd67a42005dbf4a7cbd84fc14738b9a4c3453252f1e201e8a3bf15f6a70c,PodSandboxId:5c2d964094f6fe725bd7c6bc81feac321de171f448d7299e3f498af7c9ee39ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt
:1721245152187508378,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dns5j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d248751-6ee4-460d-b608-be6586613e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed485c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b541216eac8f924abf4b5b51a1910e7214f379861a78e6e31b3bc276ecfeee75,PodSandboxId:1cb88fe353ad5b5c586bd71accdf93507b452264d40711670f17b0584a7078a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721245132198308487,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0bdc9cd649de90bf6dc1987724b6b0b,},Annotations:map[string]string{io.kubernetes.container.hash: cbb32c79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0045a361a96bb3c286b58485d5377da51626c6188cb1bb36842915bf26ac7169,PodSandboxId:aa8ba3819e3b6cfa4f19d2aa291f204e55819547354018e043360d3829364e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721245132244960618,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82075de03dc9bcae774d7465efdadcda,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e719935cefb567b7356e58bd6783794df83c6e26e2f72360c06434dc4dcc23de,PodSandboxId:dd4bfd6e5cf1b72618802ffa717fd218e540f8ff74fd537bfaa3510235e629b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721245132139562532,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b98fa702f0c3bb49b21f790be6e03f,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4087023e0c078fbd5ef52104ec1f2a7cf1111f7bd25f6810947564b65358d50d,PodSandboxId:6d283b689c2cde8cd4919bc671d01ae6593d1743c5501845fbdd2a5a3b0c4046,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721245132171679539,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977352cb4399365844bbb5e38359809c,},Annotations:map[string]string{io.kubernetes.container.hash: fd8a4af2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5f3acdc-0b8c-4353-aaa4-9114910b17dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.730445575Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc3238e0-ba74-49f6-b364-65a60b0a781b name=/runtime.v1.RuntimeService/Version
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.730519509Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc3238e0-ba74-49f6-b364-65a60b0a781b name=/runtime.v1.RuntimeService/Version
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.731649128Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1d3d761-bf53-48fb-90a4-ab88c9f72d78 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.732027532Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245999732008861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1d3d761-bf53-48fb-90a4-ab88c9f72d78 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.732743237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07927fbd-7260-453b-82ba-52dd493ea1e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.732796405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07927fbd-7260-453b-82ba-52dd493ea1e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.732985544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48e5a7e0f2ab78ae01bb2cd94dc7f9263c45ae6f2c395ddf07a0345de994354c,PodSandboxId:728b051abda92b9142c884ee532f4ac287339ee45160c63a0f4cac6e55e60d07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721245154483205900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11a18e44-b523-46b2-a890-dd693460e032,},Annotations:map[string]string{io.kubernetes.container.hash: 46490f3f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24336c4ef38287d9898eede33a346456e43912d0645a47e1ad017f588c33f5fc,PodSandboxId:7fdb130b2f33b50b1d2677d8b84782c31011f61607063b773d5f5fb49e5f0fb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245153052073256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-45xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c936942-55bb-44c9-b446-365ec316c390,},Annotations:map[string]string{io.kubernetes.container.hash: c0d9cec2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c225d15e2f6e8d567d624062f936369e4e42076ff901dc80241a0d8f2b237a,PodSandboxId:3d0f83a962a14e94ea404c00c086f11b0dec6f9f7eb514c4ca5c1a8ef678b478,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245153000966515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nw8g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
313a484-73be-49e2-a483-b15f47abc24a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a2d088d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b02cd67a42005dbf4a7cbd84fc14738b9a4c3453252f1e201e8a3bf15f6a70c,PodSandboxId:5c2d964094f6fe725bd7c6bc81feac321de171f448d7299e3f498af7c9ee39ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt
:1721245152187508378,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dns5j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d248751-6ee4-460d-b608-be6586613e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed485c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b541216eac8f924abf4b5b51a1910e7214f379861a78e6e31b3bc276ecfeee75,PodSandboxId:1cb88fe353ad5b5c586bd71accdf93507b452264d40711670f17b0584a7078a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721245132198308487,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0bdc9cd649de90bf6dc1987724b6b0b,},Annotations:map[string]string{io.kubernetes.container.hash: cbb32c79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0045a361a96bb3c286b58485d5377da51626c6188cb1bb36842915bf26ac7169,PodSandboxId:aa8ba3819e3b6cfa4f19d2aa291f204e55819547354018e043360d3829364e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721245132244960618,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82075de03dc9bcae774d7465efdadcda,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e719935cefb567b7356e58bd6783794df83c6e26e2f72360c06434dc4dcc23de,PodSandboxId:dd4bfd6e5cf1b72618802ffa717fd218e540f8ff74fd537bfaa3510235e629b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721245132139562532,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b98fa702f0c3bb49b21f790be6e03f,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4087023e0c078fbd5ef52104ec1f2a7cf1111f7bd25f6810947564b65358d50d,PodSandboxId:6d283b689c2cde8cd4919bc671d01ae6593d1743c5501845fbdd2a5a3b0c4046,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721245132171679539,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977352cb4399365844bbb5e38359809c,},Annotations:map[string]string{io.kubernetes.container.hash: fd8a4af2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07927fbd-7260-453b-82ba-52dd493ea1e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.771930748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29503c48-98c4-412c-9a3b-ce64243b3663 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.772113637Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29503c48-98c4-412c-9a3b-ce64243b3663 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.773270771Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58fbb891-1bec-47f8-a22f-f97eb9a5754b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.773842393Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245999773816738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58fbb891-1bec-47f8-a22f-f97eb9a5754b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.774648766Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15712153-8a9f-46f1-b033-715a15e8d2f0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.774701969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15712153-8a9f-46f1-b033-715a15e8d2f0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.774872477Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48e5a7e0f2ab78ae01bb2cd94dc7f9263c45ae6f2c395ddf07a0345de994354c,PodSandboxId:728b051abda92b9142c884ee532f4ac287339ee45160c63a0f4cac6e55e60d07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721245154483205900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11a18e44-b523-46b2-a890-dd693460e032,},Annotations:map[string]string{io.kubernetes.container.hash: 46490f3f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24336c4ef38287d9898eede33a346456e43912d0645a47e1ad017f588c33f5fc,PodSandboxId:7fdb130b2f33b50b1d2677d8b84782c31011f61607063b773d5f5fb49e5f0fb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245153052073256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-45xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c936942-55bb-44c9-b446-365ec316c390,},Annotations:map[string]string{io.kubernetes.container.hash: c0d9cec2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c225d15e2f6e8d567d624062f936369e4e42076ff901dc80241a0d8f2b237a,PodSandboxId:3d0f83a962a14e94ea404c00c086f11b0dec6f9f7eb514c4ca5c1a8ef678b478,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245153000966515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nw8g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
313a484-73be-49e2-a483-b15f47abc24a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a2d088d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b02cd67a42005dbf4a7cbd84fc14738b9a4c3453252f1e201e8a3bf15f6a70c,PodSandboxId:5c2d964094f6fe725bd7c6bc81feac321de171f448d7299e3f498af7c9ee39ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt
:1721245152187508378,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dns5j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d248751-6ee4-460d-b608-be6586613e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed485c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b541216eac8f924abf4b5b51a1910e7214f379861a78e6e31b3bc276ecfeee75,PodSandboxId:1cb88fe353ad5b5c586bd71accdf93507b452264d40711670f17b0584a7078a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721245132198308487,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0bdc9cd649de90bf6dc1987724b6b0b,},Annotations:map[string]string{io.kubernetes.container.hash: cbb32c79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0045a361a96bb3c286b58485d5377da51626c6188cb1bb36842915bf26ac7169,PodSandboxId:aa8ba3819e3b6cfa4f19d2aa291f204e55819547354018e043360d3829364e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721245132244960618,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82075de03dc9bcae774d7465efdadcda,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e719935cefb567b7356e58bd6783794df83c6e26e2f72360c06434dc4dcc23de,PodSandboxId:dd4bfd6e5cf1b72618802ffa717fd218e540f8ff74fd537bfaa3510235e629b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721245132139562532,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b98fa702f0c3bb49b21f790be6e03f,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4087023e0c078fbd5ef52104ec1f2a7cf1111f7bd25f6810947564b65358d50d,PodSandboxId:6d283b689c2cde8cd4919bc671d01ae6593d1743c5501845fbdd2a5a3b0c4046,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721245132171679539,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977352cb4399365844bbb5e38359809c,},Annotations:map[string]string{io.kubernetes.container.hash: fd8a4af2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15712153-8a9f-46f1-b033-715a15e8d2f0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.811336063Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d7732462-029b-4e76-ab49-96b7d01d610e name=/runtime.v1.RuntimeService/Version
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.811436956Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d7732462-029b-4e76-ab49-96b7d01d610e name=/runtime.v1.RuntimeService/Version
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.812821216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55a3c1fa-2dd9-47bd-871d-73c0d718d96a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.813581956Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245999813550472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55a3c1fa-2dd9-47bd-871d-73c0d718d96a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.814252426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=640ba45e-dc46-47fe-9185-87e3d58401d6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.814307414Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=640ba45e-dc46-47fe-9185-87e3d58401d6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:53:19 embed-certs-637675 crio[728]: time="2024-07-17 19:53:19.814495817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:48e5a7e0f2ab78ae01bb2cd94dc7f9263c45ae6f2c395ddf07a0345de994354c,PodSandboxId:728b051abda92b9142c884ee532f4ac287339ee45160c63a0f4cac6e55e60d07,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721245154483205900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11a18e44-b523-46b2-a890-dd693460e032,},Annotations:map[string]string{io.kubernetes.container.hash: 46490f3f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24336c4ef38287d9898eede33a346456e43912d0645a47e1ad017f588c33f5fc,PodSandboxId:7fdb130b2f33b50b1d2677d8b84782c31011f61607063b773d5f5fb49e5f0fb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245153052073256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-45xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c936942-55bb-44c9-b446-365ec316c390,},Annotations:map[string]string{io.kubernetes.container.hash: c0d9cec2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c225d15e2f6e8d567d624062f936369e4e42076ff901dc80241a0d8f2b237a,PodSandboxId:3d0f83a962a14e94ea404c00c086f11b0dec6f9f7eb514c4ca5c1a8ef678b478,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721245153000966515,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nw8g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
313a484-73be-49e2-a483-b15f47abc24a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a2d088d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b02cd67a42005dbf4a7cbd84fc14738b9a4c3453252f1e201e8a3bf15f6a70c,PodSandboxId:5c2d964094f6fe725bd7c6bc81feac321de171f448d7299e3f498af7c9ee39ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt
:1721245152187508378,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dns5j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d248751-6ee4-460d-b608-be6586613e3d,},Annotations:map[string]string{io.kubernetes.container.hash: 8ed485c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b541216eac8f924abf4b5b51a1910e7214f379861a78e6e31b3bc276ecfeee75,PodSandboxId:1cb88fe353ad5b5c586bd71accdf93507b452264d40711670f17b0584a7078a1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721245132198308487,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0bdc9cd649de90bf6dc1987724b6b0b,},Annotations:map[string]string{io.kubernetes.container.hash: cbb32c79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0045a361a96bb3c286b58485d5377da51626c6188cb1bb36842915bf26ac7169,PodSandboxId:aa8ba3819e3b6cfa4f19d2aa291f204e55819547354018e043360d3829364e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721245132244960618,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82075de03dc9bcae774d7465efdadcda,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e719935cefb567b7356e58bd6783794df83c6e26e2f72360c06434dc4dcc23de,PodSandboxId:dd4bfd6e5cf1b72618802ffa717fd218e540f8ff74fd537bfaa3510235e629b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721245132139562532,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3b98fa702f0c3bb49b21f790be6e03f,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4087023e0c078fbd5ef52104ec1f2a7cf1111f7bd25f6810947564b65358d50d,PodSandboxId:6d283b689c2cde8cd4919bc671d01ae6593d1743c5501845fbdd2a5a3b0c4046,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721245132171679539,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-637675,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977352cb4399365844bbb5e38359809c,},Annotations:map[string]string{io.kubernetes.container.hash: fd8a4af2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=640ba45e-dc46-47fe-9185-87e3d58401d6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	48e5a7e0f2ab7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   728b051abda92       storage-provisioner
	24336c4ef3828       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   7fdb130b2f33b       coredns-7db6d8ff4d-45xn7
	b1c225d15e2f6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   3d0f83a962a14       coredns-7db6d8ff4d-nw8g8
	4b02cd67a4200       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   14 minutes ago      Running             kube-proxy                0                   5c2d964094f6f       kube-proxy-dns5j
	0045a361a96bb       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   14 minutes ago      Running             kube-scheduler            2                   aa8ba3819e3b6       kube-scheduler-embed-certs-637675
	b541216eac8f9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   14 minutes ago      Running             etcd                      2                   1cb88fe353ad5       etcd-embed-certs-637675
	4087023e0c078       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   14 minutes ago      Running             kube-apiserver            2                   6d283b689c2cd       kube-apiserver-embed-certs-637675
	e719935cefb56       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   14 minutes ago      Running             kube-controller-manager   2                   dd4bfd6e5cf1b       kube-controller-manager-embed-certs-637675
	
	
	==> coredns [24336c4ef38287d9898eede33a346456e43912d0645a47e1ad017f588c33f5fc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b1c225d15e2f6e8d567d624062f936369e4e42076ff901dc80241a0d8f2b237a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-637675
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-637675
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6
	                    minikube.k8s.io/name=embed-certs-637675
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T19_38_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 19:38:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-637675
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 19:53:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 19:49:30 +0000   Wed, 17 Jul 2024 19:38:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 19:49:30 +0000   Wed, 17 Jul 2024 19:38:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 19:49:30 +0000   Wed, 17 Jul 2024 19:38:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 19:49:30 +0000   Wed, 17 Jul 2024 19:38:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.140
	  Hostname:    embed-certs-637675
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbc27d91064f433ea9e3ab8310569cdd
	  System UUID:                fbc27d91-064f-433e-a9e3-ab8310569cdd
	  Boot ID:                    460442a8-053d-4618-a237-37e320ba92e3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-45xn7                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-nw8g8                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-embed-certs-637675                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-embed-certs-637675             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-embed-certs-637675    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-dns5j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-embed-certs-637675             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-569cc877fc-jf42d               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node embed-certs-637675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node embed-certs-637675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node embed-certs-637675 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node embed-certs-637675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node embed-certs-637675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node embed-certs-637675 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node embed-certs-637675 event: Registered Node embed-certs-637675 in Controller
	
	
	==> dmesg <==
	[  +0.045243] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.003931] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.421170] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.603500] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.236915] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.066877] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057944] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.196685] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.125991] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.340304] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.283972] systemd-fstab-generator[808]: Ignoring "noauto" option for root device
	[  +0.061966] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.899535] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[Jul17 19:34] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.299545] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.636813] kauditd_printk_skb: 27 callbacks suppressed
	[Jul17 19:38] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.636497] systemd-fstab-generator[3587]: Ignoring "noauto" option for root device
	[  +4.587257] kauditd_printk_skb: 55 callbacks suppressed
	[  +1.477763] systemd-fstab-generator[3911]: Ignoring "noauto" option for root device
	[Jul17 19:39] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.404918] systemd-fstab-generator[4236]: Ignoring "noauto" option for root device
	[Jul17 19:40] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [b541216eac8f924abf4b5b51a1910e7214f379861a78e6e31b3bc276ecfeee75] <==
	{"level":"info","ts":"2024-07-17T19:38:52.864072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became leader at term 2"}
	{"level":"info","ts":"2024-07-17T19:38:52.864097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d94bec2e0ded43ac elected leader d94bec2e0ded43ac at term 2"}
	{"level":"info","ts":"2024-07-17T19:38:52.868884Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d94bec2e0ded43ac","local-member-attributes":"{Name:embed-certs-637675 ClientURLs:[https://192.168.39.140:2379]}","request-path":"/0/members/d94bec2e0ded43ac/attributes","cluster-id":"e5cf977c4e262fb4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T19:38:52.868992Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T19:38:52.869386Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:38:52.870733Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T19:38:52.871026Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T19:38:52.871057Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T19:38:52.872502Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.140:2379"}
	{"level":"info","ts":"2024-07-17T19:38:52.875256Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:38:52.875346Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:38:52.880665Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T19:38:52.880973Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T19:48:53.276364Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":711}
	{"level":"info","ts":"2024-07-17T19:48:53.284855Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":711,"took":"8.08642ms","hash":211349990,"current-db-size-bytes":2224128,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2224128,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-07-17T19:48:53.285032Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":211349990,"revision":711,"compact-revision":-1}
	{"level":"warn","ts":"2024-07-17T19:53:07.009947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.532206ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4876431660312635212 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.140\" mod_revision:1152 > success:<request_put:<key:\"/registry/masterleases/192.168.39.140\" value_size:67 lease:4876431660312635210 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.140\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T19:53:07.010461Z","caller":"traceutil/trace.go:171","msg":"trace[2004508705] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"264.428794ms","start":"2024-07-17T19:53:06.745985Z","end":"2024-07-17T19:53:07.010414Z","steps":["trace[2004508705] 'process raft request'  (duration: 139.334342ms)","trace[2004508705] 'compare'  (duration: 123.365272ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T19:53:07.254095Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.871196ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4876431660312635217 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1159 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T19:53:07.254497Z","caller":"traceutil/trace.go:171","msg":"trace[218525138] linearizableReadLoop","detail":"{readStateIndex:1348; appliedIndex:1347; }","duration":"236.596778ms","start":"2024-07-17T19:53:07.017884Z","end":"2024-07-17T19:53:07.254481Z","steps":["trace[218525138] 'read index received'  (duration: 117.226344ms)","trace[218525138] 'applied index is now lower than readState.Index'  (duration: 119.368632ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T19:53:07.254726Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.816723ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:481"}
	{"level":"info","ts":"2024-07-17T19:53:07.255072Z","caller":"traceutil/trace.go:171","msg":"trace[2125141903] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"238.994357ms","start":"2024-07-17T19:53:07.016052Z","end":"2024-07-17T19:53:07.255047Z","steps":["trace[2125141903] 'process raft request'  (duration: 119.115271ms)","trace[2125141903] 'compare'  (duration: 118.60822ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T19:53:07.25613Z","caller":"traceutil/trace.go:171","msg":"trace[1883487500] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:1161; }","duration":"237.646111ms","start":"2024-07-17T19:53:07.017862Z","end":"2024-07-17T19:53:07.255509Z","steps":["trace[1883487500] 'agreement among raft nodes before linearized reading'  (duration: 236.770279ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T19:53:07.509767Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.157103ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T19:53:07.509834Z","caller":"traceutil/trace.go:171","msg":"trace[1927905257] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1161; }","duration":"120.26632ms","start":"2024-07-17T19:53:07.389555Z","end":"2024-07-17T19:53:07.509821Z","steps":["trace[1927905257] 'range keys from in-memory index tree'  (duration: 120.05596ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:53:20 up 19 min,  0 users,  load average: 0.34, 0.18, 0.12
	Linux embed-certs-637675 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4087023e0c078fbd5ef52104ec1f2a7cf1111f7bd25f6810947564b65358d50d] <==
	I0717 19:46:55.893045       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:48:54.895120       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:48:54.895292       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0717 19:48:55.895785       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:48:55.895846       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 19:48:55.895858       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:48:55.895812       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:48:55.896079       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 19:48:55.897363       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:49:55.896420       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:49:55.896569       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 19:49:55.896633       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:49:55.897544       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:49:55.897750       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 19:49:55.897783       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:51:55.897034       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:51:55.897318       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 19:51:55.897347       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:51:55.898644       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:51:55.898767       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 19:51:55.898796       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e719935cefb567b7356e58bd6783794df83c6e26e2f72360c06434dc4dcc23de] <==
	I0717 19:47:41.701541       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:48:11.219386       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:48:11.710315       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:48:41.224474       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:48:41.720880       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:49:11.230338       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:49:11.730171       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:49:41.235521       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:49:41.739715       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:50:11.241224       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:50:11.749034       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 19:50:24.360032       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="172.058µs"
	I0717 19:50:39.360753       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="218.269µs"
	E0717 19:50:41.246888       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:50:41.757268       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:51:11.252988       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:51:11.764755       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:51:41.258003       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:51:41.772139       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:52:11.264150       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:52:11.780146       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:52:41.269935       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:52:41.790199       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:53:11.277445       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:53:11.800109       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4b02cd67a42005dbf4a7cbd84fc14738b9a4c3453252f1e201e8a3bf15f6a70c] <==
	I0717 19:39:12.493966       1 server_linux.go:69] "Using iptables proxy"
	I0717 19:39:12.517203       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.140"]
	I0717 19:39:12.601582       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 19:39:12.601676       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 19:39:12.601691       1 server_linux.go:165] "Using iptables Proxier"
	I0717 19:39:12.612778       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:39:12.613005       1 server.go:872] "Version info" version="v1.30.2"
	I0717 19:39:12.613033       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:39:12.615127       1 config.go:192] "Starting service config controller"
	I0717 19:39:12.615169       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 19:39:12.615199       1 config.go:101] "Starting endpoint slice config controller"
	I0717 19:39:12.615203       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 19:39:12.615933       1 config.go:319] "Starting node config controller"
	I0717 19:39:12.615961       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 19:39:12.715442       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 19:39:12.715504       1 shared_informer.go:320] Caches are synced for service config
	I0717 19:39:12.716010       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0045a361a96bb3c286b58485d5377da51626c6188cb1bb36842915bf26ac7169] <==
	W0717 19:38:54.947851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 19:38:54.947880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 19:38:54.947996       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 19:38:54.948157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 19:38:54.948531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 19:38:54.948567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 19:38:54.948655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 19:38:54.948690       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 19:38:54.950214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 19:38:54.950328       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 19:38:54.950368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 19:38:54.950798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 19:38:54.950916       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 19:38:54.951531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 19:38:55.802150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 19:38:55.802347       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 19:38:55.821626       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 19:38:55.821764       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 19:38:55.834315       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 19:38:55.834363       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:38:55.975562       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 19:38:55.975666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 19:38:56.104982       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 19:38:56.105082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0717 19:38:57.638502       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 19:50:57 embed-certs-637675 kubelet[3918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:50:57 embed-certs-637675 kubelet[3918]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:50:57 embed-certs-637675 kubelet[3918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:50:57 embed-certs-637675 kubelet[3918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:51:01 embed-certs-637675 kubelet[3918]: E0717 19:51:01.345950    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:51:16 embed-certs-637675 kubelet[3918]: E0717 19:51:16.344730    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:51:29 embed-certs-637675 kubelet[3918]: E0717 19:51:29.344995    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:51:41 embed-certs-637675 kubelet[3918]: E0717 19:51:41.345892    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:51:53 embed-certs-637675 kubelet[3918]: E0717 19:51:53.344685    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:51:57 embed-certs-637675 kubelet[3918]: E0717 19:51:57.372330    3918 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:51:57 embed-certs-637675 kubelet[3918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:51:57 embed-certs-637675 kubelet[3918]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:51:57 embed-certs-637675 kubelet[3918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:51:57 embed-certs-637675 kubelet[3918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:52:07 embed-certs-637675 kubelet[3918]: E0717 19:52:07.345119    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:52:19 embed-certs-637675 kubelet[3918]: E0717 19:52:19.347096    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:52:34 embed-certs-637675 kubelet[3918]: E0717 19:52:34.344942    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:52:47 embed-certs-637675 kubelet[3918]: E0717 19:52:47.346136    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:52:57 embed-certs-637675 kubelet[3918]: E0717 19:52:57.372705    3918 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:52:57 embed-certs-637675 kubelet[3918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:52:57 embed-certs-637675 kubelet[3918]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:52:57 embed-certs-637675 kubelet[3918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:52:57 embed-certs-637675 kubelet[3918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:53:02 embed-certs-637675 kubelet[3918]: E0717 19:53:02.344463    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	Jul 17 19:53:15 embed-certs-637675 kubelet[3918]: E0717 19:53:15.345902    3918 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jf42d" podUID="c92dbb96-5721-4ff9-a428-9215223d2b83"
	
	
	==> storage-provisioner [48e5a7e0f2ab78ae01bb2cd94dc7f9263c45ae6f2c395ddf07a0345de994354c] <==
	I0717 19:39:14.584671       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 19:39:14.601108       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 19:39:14.601159       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 19:39:14.610567       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 19:39:14.611551       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-637675_92b328b8-f4d0-4f3b-85d8-718bbeb8a15e!
	I0717 19:39:14.611485       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df6d8ef0-41fb-440c-add2-488fbe8a8536", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-637675_92b328b8-f4d0-4f3b-85d8-718bbeb8a15e became leader
	I0717 19:39:14.712777       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-637675_92b328b8-f4d0-4f3b-85d8-718bbeb8a15e!
	

                                                
                                                
-- /stdout --
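Aside: the recurring "Could not set up iptables canary" messages in the kubelet log above indicate that the ip6tables nat table is unavailable in the guest kernel; this is common minikube noise and appears unrelated to the metrics-server failure being diagnosed. A minimal check one might run against the node (a sketch, assuming the legacy ip6tables backend shown in the log):

	$ out/minikube-linux-amd64 -p embed-certs-637675 ssh -- sudo modprobe ip6table_nat
	$ out/minikube-linux-amd64 -p embed-certs-637675 ssh -- sudo ip6tables -t nat -L -n
	# if modprobe fails, the module is likely not built for the guest kernel and the warning is benign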
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-637675 -n embed-certs-637675
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-637675 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-jf42d
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-637675 describe pod metrics-server-569cc877fc-jf42d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-637675 describe pod metrics-server-569cc877fc-jf42d: exit status 1 (86.265165ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-jf42d" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-637675 describe pod metrics-server-569cc877fc-jf42d: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (299.98s)
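For context, the recurring metrics-server ImagePullBackOff in the kubelet log above is expected: the test enables the addon with --registries=MetricsServer=fake.domain (see the Audit table later in this report), so the image fake.domain/registry.k8s.io/echoserver:1.4 can never be pulled. A minimal way one might confirm the override on a cluster whose API server is reachable (a sketch; it assumes the addon's default deployment name metrics-server in kube-system):

	$ kubectl --context embed-certs-637675 -n kube-system get deploy metrics-server \
	    -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected output for this test setup: fake.domain/registry.k8s.io/echoserver:1.4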

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (101.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:51:02.803987  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:51:35.250997  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:51:56.647811  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/kindnet-369638/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
E0717 19:52:13.090709  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.208:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.208:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-998147 -n old-k8s-version-998147
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-998147 -n old-k8s-version-998147: exit status 2 (240.421428ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-998147" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-998147 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-998147 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.095µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-998147 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
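If the API server were reachable, one way to perform the image check the test attempted here would be (a sketch, using the deployment name from the describe command above):

	$ kubectl --context old-k8s-version-998147 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	    -o jsonpath='{.spec.template.spec.containers[0].image}'
	# the test expects this value to contain registry.k8s.io/echoserver:1.4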
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-998147 -n old-k8s-version-998147
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-998147 -n old-k8s-version-998147: exit status 2 (226.983928ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-998147 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-998147 logs -n 25: (1.683056457s)
E0717 19:52:32.425438  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-369638 sudo cat                              | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo                                  | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo find                             | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-369638 sudo crio                             | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-369638                                       | bridge-369638                | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	| delete  | -p                                                     | disable-driver-mounts-728347 | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:24 UTC |
	|         | disable-driver-mounts-728347                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:24 UTC | 17 Jul 24 19:25 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-637675            | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC | 17 Jul 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-713715             | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC | 17 Jul 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-713715                                   | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:25 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-378944  | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC | 17 Jul 24 19:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:26 UTC |                     |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-998147        | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-637675                 | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-713715                  | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-637675                                  | embed-certs-637675           | jenkins | v1.33.1 | 17 Jul 24 19:27 UTC | 17 Jul 24 19:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-713715 --memory=2200                     | no-preload-713715            | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:37 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-378944       | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-378944 | jenkins | v1.33.1 | 17 Jul 24 19:28 UTC | 17 Jul 24 19:38 UTC |
	|         | default-k8s-diff-port-378944                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-998147             | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC | 17 Jul 24 19:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-998147                              | old-k8s-version-998147       | jenkins | v1.33.1 | 17 Jul 24 19:29 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:29:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:29:11.500453  459741 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:29:11.500622  459741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:29:11.500633  459741 out.go:304] Setting ErrFile to fd 2...
	I0717 19:29:11.500639  459741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:29:11.500842  459741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:29:11.501399  459741 out.go:298] Setting JSON to false
	I0717 19:29:11.502411  459741 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11494,"bootTime":1721233057,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:29:11.502474  459741 start.go:139] virtualization: kvm guest
	I0717 19:29:11.504961  459741 out.go:177] * [old-k8s-version-998147] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:29:11.506551  459741 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 19:29:11.506614  459741 notify.go:220] Checking for updates...
	I0717 19:29:11.509388  459741 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:29:11.511209  459741 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:29:11.512669  459741 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:29:11.514164  459741 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:29:11.515499  459741 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:29:11.517240  459741 config.go:182] Loaded profile config "old-k8s-version-998147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:29:11.517702  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:29:11.517772  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:29:11.533954  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42501
	I0717 19:29:11.534390  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:29:11.534975  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:29:11.535003  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:29:11.535362  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:29:11.535550  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:29:11.537723  459741 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 19:29:11.539119  459741 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:29:11.539416  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:29:11.539452  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:29:11.554412  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32849
	I0717 19:29:11.554815  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:29:11.555296  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:29:11.555317  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:29:11.555633  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:29:11.555830  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:29:11.590907  459741 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:29:11.592089  459741 start.go:297] selected driver: kvm2
	I0717 19:29:11.592110  459741 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:29:11.592224  459741 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:29:11.592942  459741 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:29:11.593047  459741 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:29:11.607578  459741 install.go:137] /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0717 19:29:11.607960  459741 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:29:11.608027  459741 cni.go:84] Creating CNI manager for ""
	I0717 19:29:11.608045  459741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:29:11.608102  459741 start.go:340] cluster config:
	{Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:29:11.608223  459741 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:29:11.609956  459741 out.go:177] * Starting "old-k8s-version-998147" primary control-plane node in "old-k8s-version-998147" cluster
	I0717 19:29:15.576809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:11.611130  459741 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:29:11.611167  459741 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 19:29:11.611178  459741 cache.go:56] Caching tarball of preloaded images
	I0717 19:29:11.611285  459741 preload.go:172] Found /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:29:11.611302  459741 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 19:29:11.611414  459741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json ...
	I0717 19:29:11.611598  459741 start.go:360] acquireMachinesLock for old-k8s-version-998147: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:29:18.648779  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:24.728819  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:27.800821  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:33.880750  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:36.952809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:43.032777  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:46.104785  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:52.184787  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:29:55.260741  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:01.336761  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:04.408863  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:10.488814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:13.560771  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:19.640809  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:22.712791  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:28.792742  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:31.864819  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:37.944814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:41.016844  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:47.096765  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:50.168766  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:56.248814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:30:59.320805  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:05.400752  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:08.472800  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:14.552805  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:17.624781  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:23.704775  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:26.776769  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:32.856798  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:35.928859  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:42.008795  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:45.080741  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:51.160806  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:31:54.232765  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:00.312835  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:03.384814  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:09.464779  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:12.536704  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:18.616758  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:21.688749  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:27.768726  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:30.840760  459061 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.140:22: connect: no route to host
	I0717 19:32:33.845161  459147 start.go:364] duration metric: took 4m31.30170624s to acquireMachinesLock for "no-preload-713715"
	I0717 19:32:33.845231  459147 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:32:33.845239  459147 fix.go:54] fixHost starting: 
	I0717 19:32:33.845641  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:32:33.845672  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:32:33.861218  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I0717 19:32:33.861739  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:32:33.862269  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:32:33.862294  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:32:33.862688  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:32:33.862906  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:33.863078  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:32:33.864713  459147 fix.go:112] recreateIfNeeded on no-preload-713715: state=Stopped err=<nil>
	I0717 19:32:33.864747  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	W0717 19:32:33.864918  459147 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:32:33.866791  459147 out.go:177] * Restarting existing kvm2 VM for "no-preload-713715" ...
	I0717 19:32:33.842533  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:32:33.842571  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:32:33.842991  459061 buildroot.go:166] provisioning hostname "embed-certs-637675"
	I0717 19:32:33.843030  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:32:33.843258  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:32:33.844991  459061 machine.go:97] duration metric: took 4m37.424855793s to provisionDockerMachine
	I0717 19:32:33.845049  459061 fix.go:56] duration metric: took 4m37.444711115s for fixHost
	I0717 19:32:33.845058  459061 start.go:83] releasing machines lock for "embed-certs-637675", held for 4m37.444736968s
	W0717 19:32:33.845085  459061 start.go:714] error starting host: provision: host is not running
	W0717 19:32:33.845226  459061 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 19:32:33.845240  459061 start.go:729] Will try again in 5 seconds ...
	I0717 19:32:33.868034  459147 main.go:141] libmachine: (no-preload-713715) Calling .Start
	I0717 19:32:33.868203  459147 main.go:141] libmachine: (no-preload-713715) Ensuring networks are active...
	I0717 19:32:33.868998  459147 main.go:141] libmachine: (no-preload-713715) Ensuring network default is active
	I0717 19:32:33.869310  459147 main.go:141] libmachine: (no-preload-713715) Ensuring network mk-no-preload-713715 is active
	I0717 19:32:33.869667  459147 main.go:141] libmachine: (no-preload-713715) Getting domain xml...
	I0717 19:32:33.870300  459147 main.go:141] libmachine: (no-preload-713715) Creating domain...
	I0717 19:32:35.077699  459147 main.go:141] libmachine: (no-preload-713715) Waiting to get IP...
	I0717 19:32:35.078453  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.078991  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.079061  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.078942  460425 retry.go:31] will retry after 213.705648ms: waiting for machine to come up
	I0717 19:32:35.294580  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.294987  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.295015  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.294949  460425 retry.go:31] will retry after 341.137055ms: waiting for machine to come up
	I0717 19:32:35.637531  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:35.637894  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:35.637922  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:35.637842  460425 retry.go:31] will retry after 479.10915ms: waiting for machine to come up
	I0717 19:32:36.118434  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:36.118887  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:36.118918  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:36.118837  460425 retry.go:31] will retry after 404.249247ms: waiting for machine to come up
	I0717 19:32:36.524442  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:36.524847  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:36.524880  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:36.524812  460425 retry.go:31] will retry after 737.708741ms: waiting for machine to come up
	I0717 19:32:37.263864  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:37.264365  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:37.264393  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:37.264241  460425 retry.go:31] will retry after 793.874529ms: waiting for machine to come up
	I0717 19:32:38.846990  459061 start.go:360] acquireMachinesLock for embed-certs-637675: {Name:mke9f5964d3678e22f96aac00347ee7351098bbc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:32:38.059206  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:38.059645  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:38.059671  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:38.059592  460425 retry.go:31] will retry after 831.952935ms: waiting for machine to come up
	I0717 19:32:38.893113  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:38.893595  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:38.893623  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:38.893496  460425 retry.go:31] will retry after 955.463175ms: waiting for machine to come up
	I0717 19:32:39.850681  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:39.851111  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:39.851146  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:39.851045  460425 retry.go:31] will retry after 1.513026699s: waiting for machine to come up
	I0717 19:32:41.365899  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:41.366497  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:41.366528  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:41.366435  460425 retry.go:31] will retry after 1.503398124s: waiting for machine to come up
	I0717 19:32:42.872396  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:42.872932  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:42.872961  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:42.872904  460425 retry.go:31] will retry after 2.818722445s: waiting for machine to come up
	I0717 19:32:45.692847  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:45.693240  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:45.693270  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:45.693168  460425 retry.go:31] will retry after 2.647833654s: waiting for machine to come up
	I0717 19:32:48.344167  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:48.344671  459147 main.go:141] libmachine: (no-preload-713715) DBG | unable to find current IP address of domain no-preload-713715 in network mk-no-preload-713715
	I0717 19:32:48.344711  459147 main.go:141] libmachine: (no-preload-713715) DBG | I0717 19:32:48.344593  460425 retry.go:31] will retry after 3.625317785s: waiting for machine to come up
	I0717 19:32:51.973297  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.973853  459147 main.go:141] libmachine: (no-preload-713715) Found IP for machine: 192.168.61.66
	I0717 19:32:51.973882  459147 main.go:141] libmachine: (no-preload-713715) Reserving static IP address...
	I0717 19:32:51.973897  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has current primary IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.974288  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "no-preload-713715", mac: "52:54:00:9e:fc:38", ip: "192.168.61.66"} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:51.974314  459147 main.go:141] libmachine: (no-preload-713715) DBG | skip adding static IP to network mk-no-preload-713715 - found existing host DHCP lease matching {name: "no-preload-713715", mac: "52:54:00:9e:fc:38", ip: "192.168.61.66"}
	I0717 19:32:51.974324  459147 main.go:141] libmachine: (no-preload-713715) Reserved static IP address: 192.168.61.66
	I0717 19:32:51.974334  459147 main.go:141] libmachine: (no-preload-713715) Waiting for SSH to be available...
	I0717 19:32:51.974342  459147 main.go:141] libmachine: (no-preload-713715) DBG | Getting to WaitForSSH function...
	I0717 19:32:51.976322  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.976760  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:51.976804  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:51.976918  459147 main.go:141] libmachine: (no-preload-713715) DBG | Using SSH client type: external
	I0717 19:32:51.976956  459147 main.go:141] libmachine: (no-preload-713715) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa (-rw-------)
	I0717 19:32:51.976993  459147 main.go:141] libmachine: (no-preload-713715) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:32:51.977004  459147 main.go:141] libmachine: (no-preload-713715) DBG | About to run SSH command:
	I0717 19:32:51.977013  459147 main.go:141] libmachine: (no-preload-713715) DBG | exit 0
	I0717 19:32:52.100405  459147 main.go:141] libmachine: (no-preload-713715) DBG | SSH cmd err, output: <nil>: 
	I0717 19:32:52.100914  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetConfigRaw
	I0717 19:32:52.101578  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:52.103993  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.104431  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.104461  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.104779  459147 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/config.json ...
	I0717 19:32:52.104987  459147 machine.go:94] provisionDockerMachine start ...
	I0717 19:32:52.105006  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:52.105234  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.107642  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.108002  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.108027  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.108132  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.108311  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.108472  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.108628  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.108804  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.109027  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.109037  459147 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:32:52.216916  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:32:52.216949  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.217209  459147 buildroot.go:166] provisioning hostname "no-preload-713715"
	I0717 19:32:52.217238  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.217427  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.220152  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.220434  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.220472  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.220716  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.220923  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.221117  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.221230  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.221386  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.221575  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.221592  459147 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-713715 && echo "no-preload-713715" | sudo tee /etc/hostname
	I0717 19:32:52.343761  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-713715
	
	I0717 19:32:52.343802  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.347059  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.347370  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.347400  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.347652  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.347883  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.348182  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.348374  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.348625  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.348820  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.348836  459147 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-713715' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-713715/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-713715' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:32:53.313707  459447 start.go:364] duration metric: took 4m16.715852426s to acquireMachinesLock for "default-k8s-diff-port-378944"
	I0717 19:32:53.313783  459447 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:32:53.313790  459447 fix.go:54] fixHost starting: 
	I0717 19:32:53.314243  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:32:53.314285  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:32:53.330763  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0717 19:32:53.331159  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:32:53.331660  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:32:53.331686  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:32:53.332089  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:32:53.332319  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:32:53.332479  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:32:53.334126  459447 fix.go:112] recreateIfNeeded on default-k8s-diff-port-378944: state=Stopped err=<nil>
	I0717 19:32:53.334172  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	W0717 19:32:53.334327  459447 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:32:53.336801  459447 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-378944" ...
	I0717 19:32:52.462144  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:32:52.462179  459147 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:32:52.462197  459147 buildroot.go:174] setting up certificates
	I0717 19:32:52.462210  459147 provision.go:84] configureAuth start
	I0717 19:32:52.462224  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetMachineName
	I0717 19:32:52.462579  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:52.465348  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.465889  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.465919  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.466069  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.468522  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.468914  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.468950  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.469041  459147 provision.go:143] copyHostCerts
	I0717 19:32:52.469126  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:32:52.469146  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:32:52.469234  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:32:52.469357  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:32:52.469367  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:32:52.469408  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:32:52.469492  459147 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:32:52.469501  459147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:32:52.469535  459147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:32:52.469621  459147 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.no-preload-713715 san=[127.0.0.1 192.168.61.66 localhost minikube no-preload-713715]
	I0717 19:32:52.650963  459147 provision.go:177] copyRemoteCerts
	I0717 19:32:52.651037  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:32:52.651075  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.654245  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.654597  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.654616  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.654825  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.655055  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.655215  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.655411  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:52.739048  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:32:52.762566  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 19:32:52.785616  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:32:52.808881  459147 provision.go:87] duration metric: took 346.648771ms to configureAuth
	I0717 19:32:52.808922  459147 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:32:52.809145  459147 config.go:182] Loaded profile config "no-preload-713715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:32:52.809246  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:52.812111  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.812423  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:52.812457  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:52.812686  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:52.812885  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.813186  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:52.813346  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:52.813542  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:52.813778  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:52.813800  459147 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:32:53.076607  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:32:53.076638  459147 machine.go:97] duration metric: took 971.636298ms to provisionDockerMachine
	I0717 19:32:53.076652  459147 start.go:293] postStartSetup for "no-preload-713715" (driver="kvm2")
	I0717 19:32:53.076685  459147 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:32:53.076714  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.077033  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:32:53.077068  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.079605  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.079887  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.079911  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.080028  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.080217  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.080401  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.080593  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.163562  459147 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:32:53.167996  459147 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:32:53.168026  459147 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:32:53.168111  459147 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:32:53.168194  459147 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:32:53.168304  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:32:53.178039  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:32:53.201841  459147 start.go:296] duration metric: took 125.171457ms for postStartSetup
	I0717 19:32:53.201908  459147 fix.go:56] duration metric: took 19.356669392s for fixHost
	I0717 19:32:53.201944  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.204438  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.204823  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.204847  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.205012  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.205195  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.205352  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.205501  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.205632  459147 main.go:141] libmachine: Using SSH client type: native
	I0717 19:32:53.205807  459147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.66 22 <nil> <nil>}
	I0717 19:32:53.205818  459147 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:32:53.313516  459147 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244773.289121394
	
	I0717 19:32:53.313540  459147 fix.go:216] guest clock: 1721244773.289121394
	I0717 19:32:53.313547  459147 fix.go:229] Guest: 2024-07-17 19:32:53.289121394 +0000 UTC Remote: 2024-07-17 19:32:53.201923093 +0000 UTC m=+290.801143172 (delta=87.198301ms)
	I0717 19:32:53.313569  459147 fix.go:200] guest clock delta is within tolerance: 87.198301ms
	I0717 19:32:53.313595  459147 start.go:83] releasing machines lock for "no-preload-713715", held for 19.468370802s
	I0717 19:32:53.313630  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.313917  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:53.316881  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.317256  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.317287  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.317443  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.317922  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.318107  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:32:53.318182  459147 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:32:53.318238  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.318358  459147 ssh_runner.go:195] Run: cat /version.json
	I0717 19:32:53.318384  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:32:53.321257  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321424  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321620  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.321641  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321748  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:53.321772  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:53.321815  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.322061  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.322079  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:32:53.322282  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.322280  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:32:53.322459  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:32:53.322464  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.322592  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:32:53.401861  459147 ssh_runner.go:195] Run: systemctl --version
	I0717 19:32:53.425378  459147 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:32:53.567192  459147 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:32:53.575354  459147 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:32:53.575425  459147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:32:53.595781  459147 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:32:53.595818  459147 start.go:495] detecting cgroup driver to use...
	I0717 19:32:53.595955  459147 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:32:53.611488  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:32:53.625548  459147 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:32:53.625612  459147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:32:53.639207  459147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:32:53.652721  459147 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:32:53.772322  459147 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:32:53.942009  459147 docker.go:233] disabling docker service ...
	I0717 19:32:53.942092  459147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:32:53.961729  459147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:32:53.974585  459147 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:32:54.112406  459147 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:32:54.245426  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:32:54.259855  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:32:54.278930  459147 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 19:32:54.279008  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.289913  459147 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:32:54.289992  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.300687  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.312480  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.324895  459147 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:32:54.335879  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.347434  459147 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.367882  459147 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:32:54.379415  459147 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:32:54.390488  459147 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:32:54.390554  459147 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:32:54.411855  459147 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:32:54.423747  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:32:54.562086  459147 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:32:54.707957  459147 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:32:54.708052  459147 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:32:54.712631  459147 start.go:563] Will wait 60s for crictl version
	I0717 19:32:54.712693  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:54.716329  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:32:54.753525  459147 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:32:54.753634  459147 ssh_runner.go:195] Run: crio --version
	I0717 19:32:54.782659  459147 ssh_runner.go:195] Run: crio --version
	I0717 19:32:54.813996  459147 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 19:32:53.338154  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Start
	I0717 19:32:53.338327  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring networks are active...
	I0717 19:32:53.338965  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring network default is active
	I0717 19:32:53.339348  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Ensuring network mk-default-k8s-diff-port-378944 is active
	I0717 19:32:53.339780  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Getting domain xml...
	I0717 19:32:53.340436  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Creating domain...
	I0717 19:32:54.632016  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting to get IP...
	I0717 19:32:54.632953  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.633425  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.633541  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:54.633409  460568 retry.go:31] will retry after 191.141019ms: waiting for machine to come up
	I0717 19:32:54.825767  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.826279  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:54.826311  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:54.826243  460568 retry.go:31] will retry after 334.738903ms: waiting for machine to come up
	I0717 19:32:55.162861  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.163361  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.163394  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:55.163319  460568 retry.go:31] will retry after 446.719082ms: waiting for machine to come up
	I0717 19:32:55.611971  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.612359  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:55.612388  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:55.612297  460568 retry.go:31] will retry after 387.196239ms: waiting for machine to come up
	I0717 19:32:56.000969  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.001385  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.001421  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:56.001323  460568 retry.go:31] will retry after 618.776991ms: waiting for machine to come up
	I0717 19:32:54.815249  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetIP
	I0717 19:32:54.818280  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:54.818662  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:32:54.818694  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:32:54.818925  459147 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 19:32:54.823292  459147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:32:54.837168  459147 kubeadm.go:883] updating cluster {Name:no-preload-713715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:32:54.837345  459147 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:32:54.837394  459147 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:32:54.875819  459147 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 19:32:54.875859  459147 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:32:54.875946  459147 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:54.875964  459147 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:54.875987  459147 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:54.876016  459147 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:54.876030  459147 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 19:32:54.875991  459147 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:54.875971  459147 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:54.875949  459147 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:54.878011  459147 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:54.878029  459147 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:54.878033  459147 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:54.878047  459147 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:54.878078  459147 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:54.878020  459147 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:54.878020  459147 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:54.878021  459147 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 19:32:55.044905  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.065945  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.077752  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.100576  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 19:32:55.105038  459147 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 19:32:55.105122  459147 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.105181  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.109323  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.138522  459147 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 19:32:55.138582  459147 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.138652  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.166056  459147 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 19:32:55.166116  459147 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.166172  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.225986  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.255114  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.291108  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 19:32:55.291133  459147 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 19:32:55.291179  459147 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.291225  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.291238  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 19:32:55.291283  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 19:32:55.291287  459147 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 19:32:55.291355  459147 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.291382  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.317030  459147 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 19:32:55.317075  459147 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.317122  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:55.372223  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.372291  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 19:32:55.372329  459147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.378465  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 19:32:55.378498  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 19:32:55.378504  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 19:32:55.378584  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 19:32:55.378593  459147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:55.378589  459147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:32:55.443789  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 19:32:55.443799  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 19:32:55.443851  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.443902  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 19:32:55.443914  459147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:32:55.451377  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 19:32:55.451452  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 19:32:55.451487  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 19:32:55.451496  459147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:32:55.451535  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 19:32:55.451540  459147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:32:55.452022  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 19:32:55.848543  459147 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:56.622250  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.622728  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:56.622756  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:56.622674  460568 retry.go:31] will retry after 591.25664ms: waiting for machine to come up
	I0717 19:32:57.215318  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:57.215728  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:57.215760  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:57.215674  460568 retry.go:31] will retry after 1.178875952s: waiting for machine to come up
	I0717 19:32:58.396341  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:58.396810  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:58.396840  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:58.396757  460568 retry.go:31] will retry after 1.444090511s: waiting for machine to come up
	I0717 19:32:59.842294  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:32:59.842722  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:32:59.842750  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:32:59.842683  460568 retry.go:31] will retry after 1.660894501s: waiting for machine to come up
	I0717 19:32:57.819031  459147 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.367504857s)
	I0717 19:32:57.819080  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 19:32:57.819112  459147 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.367550192s)
	I0717 19:32:57.819123  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 19:32:57.819196  459147 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.970607417s)
	I0717 19:32:57.819211  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.375270996s)
	I0717 19:32:57.819232  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 19:32:57.819254  459147 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 19:32:57.819260  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:57.819291  459147 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:57.819322  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 19:32:57.819335  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:32:57.823619  459147 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:32:59.879412  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.060056699s)
	I0717 19:32:59.879448  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 19:32:59.879475  459147 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.055825616s)
	I0717 19:32:59.879539  459147 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 19:32:59.879480  459147 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:32:59.879645  459147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:32:59.879762  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 19:33:01.862179  459147 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.982496804s)
	I0717 19:33:01.862232  459147 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 19:33:01.862284  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.982489567s)
	I0717 19:33:01.862311  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 19:33:01.862352  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:33:01.862439  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 19:33:01.505553  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:01.505921  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:01.505949  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:01.505876  460568 retry.go:31] will retry after 1.937668711s: waiting for machine to come up
	I0717 19:33:03.445356  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:03.445903  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:03.445949  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:03.445839  460568 retry.go:31] will retry after 2.088910223s: waiting for machine to come up
	I0717 19:33:05.537212  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:05.537609  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:05.537640  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:05.537527  460568 retry.go:31] will retry after 2.960616491s: waiting for machine to come up
	I0717 19:33:03.827643  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.965173972s)
	I0717 19:33:03.827677  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 19:33:03.827712  459147 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:33:03.827769  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 19:33:05.287464  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.459663322s)
	I0717 19:33:05.287509  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 19:33:05.287543  459147 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:33:05.287638  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 19:33:08.500028  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:08.500625  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | unable to find current IP address of domain default-k8s-diff-port-378944 in network mk-default-k8s-diff-port-378944
	I0717 19:33:08.500667  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | I0717 19:33:08.500568  460568 retry.go:31] will retry after 3.494426589s: waiting for machine to come up
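
The "will retry after ...: waiting for machine to come up" lines are minikube polling the kvm2/libvirt network for the new domain's DHCP lease, sleeping a randomized, growing interval between attempts. A rough Go sketch of such a jittered backoff loop; the doubling policy, attempt count, and lookupIP stand-in are assumptions, not retry.go's exact behaviour:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	// Placeholder for the DHCP-lease lookup performed against the hypervisor in the real flow.
	lookupIP := func() (string, bool) { return "", false }

	wait := 200 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		if ip, ok := lookupIP(); ok {
			fmt.Println("got IP:", ip)
			return
		}
		jittered := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("attempt %d: will retry after %v: waiting for machine to come up\n", attempt, jittered)
		time.Sleep(jittered)
		wait *= 2
	}
	fmt.Println("gave up waiting for an IP address")
}
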
	I0717 19:33:08.560006  459147 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.272339244s)
	I0717 19:33:08.560060  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 19:33:08.560099  459147 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:33:08.560169  459147 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:33:09.202632  459147 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 19:33:09.202684  459147 cache_images.go:123] Successfully loaded all cached images
	I0717 19:33:09.202692  459147 cache_images.go:92] duration metric: took 14.326812062s to LoadCachedImages
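
The image-loading sequence that just completed follows a simple pattern per cached image: check whether the tarball already exists on the VM (the stat calls), copy it over only if missing, then load it into the container runtime with podman. A compact Go sketch of that loop; the paths and the alreadyOnVM stand-in are illustrative, not minikube's cache_images API:

package main

import "fmt"

func main() {
	tarballs := []string{
		"/var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0",
		"/var/lib/minikube/images/kube-proxy_v1.31.0-beta.0",
		"/var/lib/minikube/images/coredns_v1.11.1",
	}
	// Stand-in for the `stat -c "%s %y" <path>` check run over SSH in the log above.
	alreadyOnVM := func(path string) bool { return true }

	for _, t := range tarballs {
		if !alreadyOnVM(t) {
			fmt.Printf("copy %s to the VM\n", t)
		}
		fmt.Printf("sudo podman load -i %s\n", t) // load the tarball into the runtime's image store
	}
}
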
	I0717 19:33:09.202709  459147 kubeadm.go:934] updating node { 192.168.61.66 8443 v1.31.0-beta.0 crio true true} ...
	I0717 19:33:09.202917  459147 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-713715 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:09.203024  459147 ssh_runner.go:195] Run: crio config
	I0717 19:33:09.250281  459147 cni.go:84] Creating CNI manager for ""
	I0717 19:33:09.250307  459147 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:09.250319  459147 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:09.250348  459147 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.66 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-713715 NodeName:no-preload-713715 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.66"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.66 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:09.250507  459147 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.66
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-713715"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.66
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.66"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:09.250572  459147 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 19:33:09.260855  459147 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:09.260926  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:09.270148  459147 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0717 19:33:09.287113  459147 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 19:33:09.303147  459147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0717 19:33:09.319718  459147 ssh_runner.go:195] Run: grep 192.168.61.66	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:09.323343  459147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.66	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
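
Both /etc/hosts updates in this log (host.minikube.internal earlier, control-plane.minikube.internal here) follow the same shape: strip any existing entry for the name, then append the current IP. An equivalent rewrite in Go, operating on an inlined hosts snippet rather than the VM's real file:

package main

import (
	"fmt"
	"strings"
)

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.61.1\thost.minikube.internal\n"
	const name = "control-plane.minikube.internal"
	const ip = "192.168.61.66"

	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		// Drop any stale mapping for the name, matching the `grep -v` in the shell command above.
		if !strings.HasSuffix(line, "\t"+name) {
			out = append(out, line)
		}
	}
	out = append(out, ip+"\t"+name)
	fmt.Println(strings.Join(out, "\n"))
}
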
	I0717 19:33:09.335051  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:09.458012  459147 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:09.476517  459147 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715 for IP: 192.168.61.66
	I0717 19:33:09.476548  459147 certs.go:194] generating shared ca certs ...
	I0717 19:33:09.476581  459147 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:09.476822  459147 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:09.476888  459147 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:09.476901  459147 certs.go:256] generating profile certs ...
	I0717 19:33:09.477093  459147 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/client.key
	I0717 19:33:09.477157  459147 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.key.833d71c5
	I0717 19:33:09.477198  459147 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.key
	I0717 19:33:09.477346  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:09.477380  459147 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:09.477390  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:09.477415  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:09.477436  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:09.477460  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:09.477496  459147 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:09.478210  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:09.523245  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:09.556326  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:09.592018  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:09.631190  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 19:33:09.663671  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:33:09.691062  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:09.715211  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/no-preload-713715/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:09.740818  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:09.766086  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:09.791739  459147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:09.817034  459147 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:09.835074  459147 ssh_runner.go:195] Run: openssl version
	I0717 19:33:09.841297  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:09.853525  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.857984  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.858052  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:09.864308  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:09.875577  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:09.886977  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.891840  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.891894  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:09.898044  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:09.910756  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:09.922945  459147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.927708  459147 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.927771  459147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:09.933774  459147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:09.945891  459147 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:09.950743  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:09.956992  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:09.963228  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:09.969576  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:09.975912  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:09.982164  459147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
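
The openssl "-checkend 86400" runs above verify that each control-plane certificate remains valid for at least another 24 hours. The same check expressed with Go's standard library; the file path is just one of the certificates listed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
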
	I0717 19:33:09.988308  459147 kubeadm.go:392] StartCluster: {Name:no-preload-713715 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-713715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:09.988412  459147 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:09.988473  459147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:10.038048  459147 cri.go:89] found id: ""
	I0717 19:33:10.038123  459147 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:10.050153  459147 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:10.050179  459147 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:10.050244  459147 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:10.061413  459147 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:10.062384  459147 kubeconfig.go:125] found "no-preload-713715" server: "https://192.168.61.66:8443"
	I0717 19:33:10.064510  459147 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:10.075459  459147 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.66
	I0717 19:33:10.075494  459147 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:10.075507  459147 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:10.075551  459147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:10.115024  459147 cri.go:89] found id: ""
	I0717 19:33:10.115093  459147 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:10.135459  459147 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:10.147000  459147 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:10.147027  459147 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:10.147074  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:10.158197  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:10.158267  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:10.168726  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:10.178115  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:10.178169  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:10.187888  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:10.197501  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:10.197564  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:10.208958  459147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:10.219818  459147 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:10.219889  459147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:10.230847  459147 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:10.242115  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:10.352629  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.306147  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.508125  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.570418  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:11.632907  459147 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:11.633012  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:12.133086  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:13.378581  459741 start.go:364] duration metric: took 4m1.766913597s to acquireMachinesLock for "old-k8s-version-998147"
	I0717 19:33:13.378661  459741 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:33:13.378670  459741 fix.go:54] fixHost starting: 
	I0717 19:33:13.379301  459741 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:13.379346  459741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:13.399824  459741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0717 19:33:13.400269  459741 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:13.400788  459741 main.go:141] libmachine: Using API Version  1
	I0717 19:33:13.400811  459741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:13.401179  459741 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:13.401339  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:13.401493  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetState
	I0717 19:33:13.403027  459741 fix.go:112] recreateIfNeeded on old-k8s-version-998147: state=Stopped err=<nil>
	I0717 19:33:13.403059  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	W0717 19:33:13.403205  459741 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:33:13.405244  459741 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-998147" ...
	I0717 19:33:11.996171  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.996646  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has current primary IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.996667  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Found IP for machine: 192.168.50.238
	I0717 19:33:11.996682  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Reserving static IP address...
	I0717 19:33:11.997157  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-378944", mac: "52:54:00:45:42:f3", ip: "192.168.50.238"} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:11.997197  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | skip adding static IP to network mk-default-k8s-diff-port-378944 - found existing host DHCP lease matching {name: "default-k8s-diff-port-378944", mac: "52:54:00:45:42:f3", ip: "192.168.50.238"}
	I0717 19:33:11.997213  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Reserved static IP address: 192.168.50.238
	I0717 19:33:11.997228  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Waiting for SSH to be available...
	I0717 19:33:11.997244  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Getting to WaitForSSH function...
	I0717 19:33:11.999193  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.999538  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:11.999564  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:11.999654  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Using SSH client type: external
	I0717 19:33:11.999689  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa (-rw-------)
	I0717 19:33:11.999718  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:11.999733  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | About to run SSH command:
	I0717 19:33:11.999751  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | exit 0
	I0717 19:33:12.124608  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | SSH cmd err, output: <nil>: 
	I0717 19:33:12.125041  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetConfigRaw
	I0717 19:33:12.125695  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:12.128263  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.128651  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.128683  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.128911  459447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/config.json ...
	I0717 19:33:12.129169  459447 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:12.129202  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:12.129412  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.131942  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.132259  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.132286  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.132464  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.132666  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.132847  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.133004  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.133213  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.133470  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.133484  459447 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:12.250371  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:12.250406  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.250672  459447 buildroot.go:166] provisioning hostname "default-k8s-diff-port-378944"
	I0717 19:33:12.250700  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.250891  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.253509  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.253895  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.253929  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.254116  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.254301  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.254467  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.254659  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.254809  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.255033  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.255048  459447 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-378944 && echo "default-k8s-diff-port-378944" | sudo tee /etc/hostname
	I0717 19:33:12.386839  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-378944
	
	I0717 19:33:12.386875  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.390265  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.390716  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.390758  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.390942  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.391165  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.391397  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.391593  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.391800  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.392028  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.392055  459447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-378944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-378944/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-378944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:12.510012  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:33:12.510080  459447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:12.510118  459447 buildroot.go:174] setting up certificates
	I0717 19:33:12.510139  459447 provision.go:84] configureAuth start
	I0717 19:33:12.510154  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetMachineName
	I0717 19:33:12.510469  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:12.513360  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.513713  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.513756  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.513840  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.516188  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.516606  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.516643  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.516778  459447 provision.go:143] copyHostCerts
	I0717 19:33:12.516867  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:12.516887  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:12.516946  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:12.517049  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:12.517060  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:12.517081  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:12.517133  459447 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:12.517140  459447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:12.517157  459447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:12.517251  459447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-378944 san=[127.0.0.1 192.168.50.238 default-k8s-diff-port-378944 localhost minikube]
	I0717 19:33:12.664603  459447 provision.go:177] copyRemoteCerts
	I0717 19:33:12.664664  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:12.664692  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.667683  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.668071  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.668152  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.668276  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.668477  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.668665  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.668825  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:12.759500  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 19:33:12.789011  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:12.817876  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:12.847651  459447 provision.go:87] duration metric: took 337.491277ms to configureAuth
	I0717 19:33:12.847684  459447 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:12.847927  459447 config.go:182] Loaded profile config "default-k8s-diff-port-378944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:33:12.848029  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:12.851001  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.851460  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:12.851492  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:12.851670  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:12.851860  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.852050  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:12.852269  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:12.852466  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:12.852711  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:12.852736  459447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:13.135242  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:13.135272  459447 machine.go:97] duration metric: took 1.006081548s to provisionDockerMachine
	I0717 19:33:13.135286  459447 start.go:293] postStartSetup for "default-k8s-diff-port-378944" (driver="kvm2")
	I0717 19:33:13.135300  459447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:13.135331  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.135696  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:13.135731  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.138908  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.139252  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.139296  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.139577  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.139797  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.139996  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.140122  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.223998  459447 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:13.228297  459447 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:13.228327  459447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:13.228402  459447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:13.228508  459447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:13.228631  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:13.237923  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:13.262958  459447 start.go:296] duration metric: took 127.634911ms for postStartSetup
	I0717 19:33:13.263013  459447 fix.go:56] duration metric: took 19.949222697s for fixHost
	I0717 19:33:13.263040  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.265687  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.266102  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.266147  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.266274  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.266448  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.266658  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.266803  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.266974  459447 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:13.267143  459447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.238 22 <nil> <nil>}
	I0717 19:33:13.267154  459447 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:33:13.378375  459447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244793.352700977
	
	I0717 19:33:13.378410  459447 fix.go:216] guest clock: 1721244793.352700977
	I0717 19:33:13.378423  459447 fix.go:229] Guest: 2024-07-17 19:33:13.352700977 +0000 UTC Remote: 2024-07-17 19:33:13.263019102 +0000 UTC m=+276.814321502 (delta=89.681875ms)
	I0717 19:33:13.378449  459447 fix.go:200] guest clock delta is within tolerance: 89.681875ms
	I0717 19:33:13.378455  459447 start.go:83] releasing machines lock for "default-k8s-diff-port-378944", held for 20.064692595s
	I0717 19:33:13.378490  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.378818  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:13.382250  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.382663  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.382697  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.382819  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383336  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383515  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:33:13.383640  459447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:13.383699  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.383782  459447 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:13.383808  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:33:13.386565  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.386802  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.386971  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.387022  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.387206  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.387255  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:13.387280  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:13.387377  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.387517  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:33:13.387595  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.387695  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:33:13.387769  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.387822  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:33:13.387963  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:33:13.491993  459447 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:13.498224  459447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:13.651601  459447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:13.659061  459447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:13.659131  459447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:13.679137  459447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:13.679172  459447 start.go:495] detecting cgroup driver to use...
	I0717 19:33:13.679244  459447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:13.700173  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:13.713284  459447 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:13.713345  459447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:13.727665  459447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:13.741270  459447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:13.850771  459447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:14.014484  459447 docker.go:233] disabling docker service ...
	I0717 19:33:14.014573  459447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:14.034049  459447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:14.051903  459447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:14.176188  459447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:14.339288  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:14.354934  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:14.376713  459447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:33:14.376781  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.387318  459447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:14.387395  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.401869  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.414206  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.426803  459447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:14.437992  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.448554  459447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.467390  459447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:14.478878  459447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:14.488552  459447 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:14.488623  459447 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:14.501075  459447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:33:14.511085  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:14.673591  459447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:33:14.812878  459447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:14.812974  459447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:14.818074  459447 start.go:563] Will wait 60s for crictl version
	I0717 19:33:14.818143  459447 ssh_runner.go:195] Run: which crictl
	I0717 19:33:14.822116  459447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:14.861763  459447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:14.861843  459447 ssh_runner.go:195] Run: crio --version
	I0717 19:33:14.891729  459447 ssh_runner.go:195] Run: crio --version
	I0717 19:33:14.925638  459447 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 19:33:14.927088  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetIP
	I0717 19:33:14.930542  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:14.931022  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:33:14.931068  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:33:14.931326  459447 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:14.936085  459447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:14.949590  459447 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-378944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:14.949747  459447 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:33:14.949875  459447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:14.991945  459447 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 19:33:14.992031  459447 ssh_runner.go:195] Run: which lz4
	I0717 19:33:14.996373  459447 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:33:15.000840  459447 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:15.000875  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 19:33:13.406372  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .Start
	I0717 19:33:13.406519  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring networks are active...
	I0717 19:33:13.407255  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring network default is active
	I0717 19:33:13.407627  459741 main.go:141] libmachine: (old-k8s-version-998147) Ensuring network mk-old-k8s-version-998147 is active
	I0717 19:33:13.408062  459741 main.go:141] libmachine: (old-k8s-version-998147) Getting domain xml...
	I0717 19:33:13.408909  459741 main.go:141] libmachine: (old-k8s-version-998147) Creating domain...
	I0717 19:33:14.690306  459741 main.go:141] libmachine: (old-k8s-version-998147) Waiting to get IP...
	I0717 19:33:14.691339  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:14.691802  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:14.691860  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:14.691788  460739 retry.go:31] will retry after 292.702678ms: waiting for machine to come up
	I0717 19:33:14.986450  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:14.986962  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:14.986987  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:14.986940  460739 retry.go:31] will retry after 251.722663ms: waiting for machine to come up
	I0717 19:33:15.240732  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:15.241343  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:15.241374  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:15.241290  460739 retry.go:31] will retry after 352.774498ms: waiting for machine to come up
	I0717 19:33:15.596176  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:15.596833  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:15.596859  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:15.596740  460739 retry.go:31] will retry after 570.542375ms: waiting for machine to come up
	I0717 19:33:16.168613  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:16.169103  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:16.169125  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:16.169061  460739 retry.go:31] will retry after 505.770507ms: waiting for machine to come up
	I0717 19:33:12.633596  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:12.674417  459147 api_server.go:72] duration metric: took 1.041511526s to wait for apiserver process to appear ...
	I0717 19:33:12.674443  459147 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:33:12.674473  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:12.674950  459147 api_server.go:269] stopped: https://192.168.61.66:8443/healthz: Get "https://192.168.61.66:8443/healthz": dial tcp 192.168.61.66:8443: connect: connection refused
	I0717 19:33:13.174575  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.167465  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.167503  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.167518  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.195663  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.195695  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.195712  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.203849  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:16.203880  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:16.674535  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:16.681650  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:16.681679  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:17.174938  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:17.186827  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:17.186890  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:17.674682  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:17.680814  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:17.680865  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:18.175463  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:18.181547  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:18.181576  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:18.675166  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:18.681507  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:18.681552  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:19.174630  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:19.183370  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:19.183416  459147 api_server.go:103] status: https://192.168.61.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:19.674583  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:33:19.682432  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 200:
	ok
	I0717 19:33:19.691489  459147 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:33:19.691518  459147 api_server.go:131] duration metric: took 7.017066476s to wait for apiserver health ...
	I0717 19:33:19.691534  459147 cni.go:84] Creating CNI manager for ""
	I0717 19:33:19.691542  459147 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:19.693575  459147 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:33:16.494615  459447 crio.go:462] duration metric: took 1.498275118s to copy over tarball
	I0717 19:33:16.494697  459447 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:18.869018  459447 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.37428331s)
	I0717 19:33:18.869052  459447 crio.go:469] duration metric: took 2.374406548s to extract the tarball
	I0717 19:33:18.869063  459447 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:18.911073  459447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:18.952704  459447 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:33:18.952731  459447 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:33:18.952740  459447 kubeadm.go:934] updating node { 192.168.50.238 8444 v1.30.2 crio true true} ...
	I0717 19:33:18.952871  459447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-378944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:18.952961  459447 ssh_runner.go:195] Run: crio config
	I0717 19:33:19.004936  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:33:19.004962  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:19.004976  459447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:19.004997  459447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.238 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-378944 NodeName:default-k8s-diff-port-378944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:19.005127  459447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.238
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-378944"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:19.005190  459447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 19:33:19.018466  459447 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:19.018532  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:19.030706  459447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 19:33:19.050125  459447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:19.066411  459447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0717 19:33:19.083019  459447 ssh_runner.go:195] Run: grep 192.168.50.238	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:19.086956  459447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:19.098483  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:19.219538  459447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:19.240712  459447 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944 for IP: 192.168.50.238
	I0717 19:33:19.240760  459447 certs.go:194] generating shared ca certs ...
	I0717 19:33:19.240784  459447 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:19.240971  459447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:19.241029  459447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:19.241046  459447 certs.go:256] generating profile certs ...
	I0717 19:33:19.241147  459447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/client.key
	I0717 19:33:19.241232  459447 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.key.e4ed83d1
	I0717 19:33:19.241292  459447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.key
	I0717 19:33:19.241430  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:19.241472  459447 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:19.241488  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:19.241527  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:19.241563  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:19.241599  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:19.241670  459447 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:19.242447  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:19.274950  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:19.305226  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:19.348027  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:19.384636  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 19:33:19.415615  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:33:19.443553  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:19.477731  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/default-k8s-diff-port-378944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:19.509828  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:19.536409  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:19.562482  459447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:19.586980  459447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:19.603021  459447 ssh_runner.go:195] Run: openssl version
	I0717 19:33:19.608707  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:19.619272  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.624082  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.624144  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:19.630085  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:19.640930  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:19.651717  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.656207  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.656265  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:19.662211  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:19.672893  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:19.686880  459447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.691831  459447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.691883  459447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:19.699526  459447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:19.712458  459447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:19.717815  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:19.726172  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:19.732924  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:19.739322  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:19.749452  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:19.756136  459447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 19:33:19.763812  459447 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-378944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-378944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:19.763936  459447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:19.763998  459447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:19.807197  459447 cri.go:89] found id: ""
	I0717 19:33:19.807303  459447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:19.819547  459447 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:19.819577  459447 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:19.819652  459447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:19.832162  459447 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:19.833260  459447 kubeconfig.go:125] found "default-k8s-diff-port-378944" server: "https://192.168.50.238:8444"
	I0717 19:33:19.835685  459447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:19.849027  459447 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.238
	I0717 19:33:19.849077  459447 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:19.849094  459447 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:19.849182  459447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:19.893260  459447 cri.go:89] found id: ""
	I0717 19:33:19.893337  459447 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:19.910254  459447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:19.920017  459447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:19.920039  459447 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:19.920093  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 19:33:19.929144  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:19.929212  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:19.938461  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 19:33:19.947172  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:19.947242  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:19.956774  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 19:33:19.965778  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:19.965832  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:19.975529  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 19:33:19.984977  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:19.985037  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:19.994548  459447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:20.003758  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:20.326183  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.077120  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.274281  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.372150  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:21.472510  459447 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:21.472619  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:16.676221  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:16.676783  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:16.676810  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:16.676699  460739 retry.go:31] will retry after 789.027841ms: waiting for machine to come up
	I0717 19:33:17.467899  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:17.468360  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:17.468388  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:17.468307  460739 retry.go:31] will retry after 851.039047ms: waiting for machine to come up
	I0717 19:33:18.321307  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:18.321848  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:18.321877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:18.321790  460739 retry.go:31] will retry after 1.177722997s: waiting for machine to come up
	I0717 19:33:19.501191  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:19.501846  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:19.501877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:19.501754  460739 retry.go:31] will retry after 1.20353732s: waiting for machine to come up
	I0717 19:33:20.707223  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:20.707681  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:20.707715  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:20.707620  460739 retry.go:31] will retry after 2.05955161s: waiting for machine to come up
	I0717 19:33:19.694884  459147 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:33:19.710519  459147 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:33:19.732437  459147 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:33:19.743619  459147 system_pods.go:59] 8 kube-system pods found
	I0717 19:33:19.743647  459147 system_pods.go:61] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:33:19.743657  459147 system_pods.go:61] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:33:19.743667  459147 system_pods.go:61] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:33:19.743681  459147 system_pods.go:61] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:33:19.743688  459147 system_pods.go:61] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:33:19.743698  459147 system_pods.go:61] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:33:19.743711  459147 system_pods.go:61] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:33:19.743718  459147 system_pods.go:61] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:33:19.743725  459147 system_pods.go:74] duration metric: took 11.261865ms to wait for pod list to return data ...
	I0717 19:33:19.743742  459147 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:33:19.749108  459147 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:33:19.749135  459147 node_conditions.go:123] node cpu capacity is 2
	I0717 19:33:19.749163  459147 node_conditions.go:105] duration metric: took 5.414531ms to run NodePressure ...
	I0717 19:33:19.749183  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:22.151017  459147 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (2.401804862s)
	I0717 19:33:22.151065  459147 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:33:22.158240  459147 kubeadm.go:739] kubelet initialised
	I0717 19:33:22.158277  459147 kubeadm.go:740] duration metric: took 7.198956ms waiting for restarted kubelet to initialise ...
	I0717 19:33:22.158298  459147 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:22.164783  459147 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.174103  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.174465  459147 pod_ready.go:81] duration metric: took 9.568158ms for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.174513  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.174544  459147 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.184692  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "etcd-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.184804  459147 pod_ready.go:81] duration metric: took 10.23708ms for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.184862  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "etcd-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.184891  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.193029  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-apiserver-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.193143  459147 pod_ready.go:81] duration metric: took 8.227095ms for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.193175  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-apiserver-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.193234  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.200916  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.201017  459147 pod_ready.go:81] duration metric: took 7.740745ms for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.201047  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.201081  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.555554  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-proxy-x85f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.555590  459147 pod_ready.go:81] duration metric: took 354.475367ms for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.555603  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-proxy-x85f5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.555612  459147 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:22.977850  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "kube-scheduler-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.977889  459147 pod_ready.go:81] duration metric: took 422.268041ms for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:22.977904  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "kube-scheduler-no-preload-713715" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:22.977913  459147 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:23.355727  459147 pod_ready.go:97] node "no-preload-713715" hosting pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:23.355765  459147 pod_ready.go:81] duration metric: took 377.839773ms for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:23.355778  459147 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-713715" hosting pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:23.355787  459147 pod_ready.go:38] duration metric: took 1.197476636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:23.355807  459147 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:33:23.369763  459147 ops.go:34] apiserver oom_adj: -16
	I0717 19:33:23.369789  459147 kubeadm.go:597] duration metric: took 13.319602224s to restartPrimaryControlPlane
	I0717 19:33:23.369801  459147 kubeadm.go:394] duration metric: took 13.381501456s to StartCluster
	I0717 19:33:23.369825  459147 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:23.369925  459147 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:33:23.371364  459147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:23.371643  459147 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:33:23.371763  459147 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:33:23.371851  459147 addons.go:69] Setting storage-provisioner=true in profile "no-preload-713715"
	I0717 19:33:23.371902  459147 addons.go:234] Setting addon storage-provisioner=true in "no-preload-713715"
	I0717 19:33:23.371905  459147 config.go:182] Loaded profile config "no-preload-713715": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0717 19:33:23.371915  459147 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:33:23.371904  459147 addons.go:69] Setting default-storageclass=true in profile "no-preload-713715"
	I0717 19:33:23.371921  459147 addons.go:69] Setting metrics-server=true in profile "no-preload-713715"
	I0717 19:33:23.371949  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.371963  459147 addons.go:234] Setting addon metrics-server=true in "no-preload-713715"
	I0717 19:33:23.371962  459147 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-713715"
	W0717 19:33:23.371973  459147 addons.go:243] addon metrics-server should already be in state true
	I0717 19:33:23.372010  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.372248  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372283  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.372354  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372363  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.372380  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.372466  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.373392  459147 out.go:177] * Verifying Kubernetes components...
	I0717 19:33:23.374639  459147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:23.391842  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45469
	I0717 19:33:23.391844  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I0717 19:33:23.392376  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.392449  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.392909  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.392934  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.393266  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.393283  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.393316  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.393673  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.394050  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.394066  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.394279  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.394317  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.413449  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36467
	I0717 19:33:23.413977  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.414416  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.414429  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.414535  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35317
	I0717 19:33:23.414847  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.415050  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.415439  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33637
	I0717 19:33:23.415603  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.416098  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.416416  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.416442  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.416782  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.416860  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.417110  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.417129  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.417169  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.417631  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.417898  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.419162  459147 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:33:23.419540  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.420437  459147 addons.go:234] Setting addon default-storageclass=true in "no-preload-713715"
	W0717 19:33:23.420461  459147 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:33:23.420531  459147 host.go:66] Checking if "no-preload-713715" exists ...
	I0717 19:33:23.420670  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:33:23.420690  459147 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:33:23.420710  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.420935  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.420987  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.421482  459147 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:23.422876  459147 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:33:23.422895  459147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:33:23.422914  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.424665  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.425387  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.425596  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.425648  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.425860  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.426032  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.426224  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.426508  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.426884  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.426912  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.427019  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.427204  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.427375  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.427536  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.440935  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0717 19:33:23.441405  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.442015  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.442036  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.442449  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.443045  459147 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:23.443086  459147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:23.462722  459147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0717 19:33:23.463099  459147 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:23.463642  459147 main.go:141] libmachine: Using API Version  1
	I0717 19:33:23.463666  459147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:23.464015  459147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:23.464302  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetState
	I0717 19:33:23.465945  459147 main.go:141] libmachine: (no-preload-713715) Calling .DriverName
	I0717 19:33:23.466153  459147 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:33:23.466168  459147 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:33:23.466187  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHHostname
	I0717 19:33:23.469235  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.469665  459147 main.go:141] libmachine: (no-preload-713715) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:fc:38", ip: ""} in network mk-no-preload-713715: {Iface:virbr3 ExpiryTime:2024-07-17 20:32:44 +0000 UTC Type:0 Mac:52:54:00:9e:fc:38 Iaid: IPaddr:192.168.61.66 Prefix:24 Hostname:no-preload-713715 Clientid:01:52:54:00:9e:fc:38}
	I0717 19:33:23.469690  459147 main.go:141] libmachine: (no-preload-713715) DBG | domain no-preload-713715 has defined IP address 192.168.61.66 and MAC address 52:54:00:9e:fc:38 in network mk-no-preload-713715
	I0717 19:33:23.469961  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHPort
	I0717 19:33:23.470125  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHKeyPath
	I0717 19:33:23.470263  459147 main.go:141] libmachine: (no-preload-713715) Calling .GetSSHUsername
	I0717 19:33:23.470380  459147 sshutil.go:53] new ssh client: &{IP:192.168.61.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/no-preload-713715/id_rsa Username:docker}
	I0717 19:33:23.604321  459147 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:23.631723  459147 node_ready.go:35] waiting up to 6m0s for node "no-preload-713715" to be "Ready" ...
	I0717 19:33:23.691508  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:33:23.691839  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:33:23.870407  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:33:23.870440  459147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:33:23.962828  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:33:23.962862  459147 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:33:24.048413  459147 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:33:24.048458  459147 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:33:24.180828  459147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:33:25.337869  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.645994421s)
	I0717 19:33:25.337928  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.337939  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.338245  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.338260  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.338267  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.338279  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.340140  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.340158  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.340163  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.341608  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.650024823s)
	I0717 19:33:25.341659  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.341673  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.341991  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.342008  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.342052  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.342072  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.342087  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.343152  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.343174  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.374730  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.374764  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.375093  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.375192  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.375214  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.648979  459147 node_ready.go:53] node "no-preload-713715" has status "Ready":"False"
	I0717 19:33:25.756694  459147 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.575723552s)
	I0717 19:33:25.756793  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.756809  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.757133  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.757197  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.757210  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.757222  459147 main.go:141] libmachine: Making call to close driver server
	I0717 19:33:25.757231  459147 main.go:141] libmachine: (no-preload-713715) Calling .Close
	I0717 19:33:25.757463  459147 main.go:141] libmachine: (no-preload-713715) DBG | Closing plugin on server side
	I0717 19:33:25.757496  459147 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:33:25.757508  459147 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:33:25.757518  459147 addons.go:475] Verifying addon metrics-server=true in "no-preload-713715"
	I0717 19:33:25.760056  459147 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:33:21.973023  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:22.473773  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:22.494696  459447 api_server.go:72] duration metric: took 1.022184833s to wait for apiserver process to appear ...
	I0717 19:33:22.494730  459447 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:33:22.494756  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:22.495278  459447 api_server.go:269] stopped: https://192.168.50.238:8444/healthz: Get "https://192.168.50.238:8444/healthz": dial tcp 192.168.50.238:8444: connect: connection refused
	I0717 19:33:22.994814  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.523793  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:25.523836  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:25.523861  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.572664  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:33:25.572703  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:33:25.994910  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:25.999901  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:25.999941  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:22.769700  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:22.770437  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:22.770462  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:22.770379  460739 retry.go:31] will retry after 2.380645077s: waiting for machine to come up
	I0717 19:33:25.152531  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:25.153124  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:25.153154  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:25.152995  460739 retry.go:31] will retry after 2.594173577s: waiting for machine to come up
	I0717 19:33:25.761158  459147 addons.go:510] duration metric: took 2.389396179s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:33:26.636593  459147 node_ready.go:49] node "no-preload-713715" has status "Ready":"True"
	I0717 19:33:26.636631  459147 node_ready.go:38] duration metric: took 3.004871258s for node "no-preload-713715" to be "Ready" ...
	I0717 19:33:26.636647  459147 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:26.645025  459147 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.657588  459147 pod_ready.go:92] pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:26.657621  459147 pod_ready.go:81] duration metric: took 12.564266ms for pod "coredns-5cfdc65f69-hk8t7" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.657643  459147 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:26.495865  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:26.501901  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:26.501948  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:26.995379  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:27.007246  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:27.007293  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:27.495657  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:27.500340  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:27.500376  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:27.995477  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.001272  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:28.001311  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:28.495106  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.499745  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:33:28.499785  459447 api_server.go:103] status: https://192.168.50.238:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:33:28.994956  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:33:28.999368  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 200:
	ok
	I0717 19:33:29.005912  459447 api_server.go:141] control plane version: v1.30.2
	I0717 19:33:29.005941  459447 api_server.go:131] duration metric: took 6.511204058s to wait for apiserver health ...
	I0717 19:33:29.005952  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:33:29.005958  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:29.007962  459447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:33:29.009467  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:33:29.020044  459447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:33:29.039591  459447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:33:29.049534  459447 system_pods.go:59] 8 kube-system pods found
	I0717 19:33:29.049575  459447 system_pods.go:61] "coredns-7db6d8ff4d-zrllj" [a343d67b-7bfe-4433-a6a0-dd129f622484] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:33:29.049585  459447 system_pods.go:61] "etcd-default-k8s-diff-port-378944" [8b73f940-3131-4c49-88a8-909e448a17fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:33:29.049592  459447 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-378944" [4368acf5-fcf0-4bb1-8518-dc883a3ad94a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:33:29.049600  459447 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-378944" [a9dce074-19b1-4375-bb51-2fa3a7e628a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:33:29.049605  459447 system_pods.go:61] "kube-proxy-qq6gq" [7cd51f2c-1d5d-4376-8685-a4912f158995] Running
	I0717 19:33:29.049609  459447 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-378944" [2889aa80-5d65-485f-b4ef-396e76a40a80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:33:29.049617  459447 system_pods.go:61] "metrics-server-569cc877fc-7rl9d" [217e917f-6179-4b21-baed-7293ef9f6fc2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:33:29.049621  459447 system_pods.go:61] "storage-provisioner" [fc434634-e675-4df7-8df2-330e3f2cf36b] Running
	I0717 19:33:29.049628  459447 system_pods.go:74] duration metric: took 10.013687ms to wait for pod list to return data ...
	I0717 19:33:29.049640  459447 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:33:29.053279  459447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:33:29.053306  459447 node_conditions.go:123] node cpu capacity is 2
	I0717 19:33:29.053318  459447 node_conditions.go:105] duration metric: took 3.672966ms to run NodePressure ...
	I0717 19:33:29.053336  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:29.329460  459447 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:33:29.335545  459447 kubeadm.go:739] kubelet initialised
	I0717 19:33:29.335570  459447 kubeadm.go:740] duration metric: took 6.082515ms waiting for restarted kubelet to initialise ...
	I0717 19:33:29.335587  459447 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:33:29.343632  459447 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.348772  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.348798  459447 pod_ready.go:81] duration metric: took 5.144899ms for pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.348810  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "coredns-7db6d8ff4d-zrllj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.348820  459447 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.354355  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.354386  459447 pod_ready.go:81] duration metric: took 5.550767ms for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.354398  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.354410  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:29.359416  459447 pod_ready.go:97] node "default-k8s-diff-port-378944" hosting pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.359433  459447 pod_ready.go:81] duration metric: took 5.007721ms for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	E0717 19:33:29.359442  459447 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-378944" hosting pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-378944" has status "Ready":"False"
	I0717 19:33:29.359448  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:31.369477  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:27.748311  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:27.748683  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | unable to find current IP address of domain old-k8s-version-998147 in network mk-old-k8s-version-998147
	I0717 19:33:27.748710  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | I0717 19:33:27.748647  460739 retry.go:31] will retry after 3.034683519s: waiting for machine to come up
	I0717 19:33:30.784524  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.784995  459741 main.go:141] libmachine: (old-k8s-version-998147) Found IP for machine: 192.168.72.208
	I0717 19:33:30.785018  459741 main.go:141] libmachine: (old-k8s-version-998147) Reserving static IP address...
	I0717 19:33:30.785042  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has current primary IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.785437  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "old-k8s-version-998147", mac: "52:54:00:e7:d4:91", ip: "192.168.72.208"} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.785462  459741 main.go:141] libmachine: (old-k8s-version-998147) Reserved static IP address: 192.168.72.208
	I0717 19:33:30.785478  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | skip adding static IP to network mk-old-k8s-version-998147 - found existing host DHCP lease matching {name: "old-k8s-version-998147", mac: "52:54:00:e7:d4:91", ip: "192.168.72.208"}
	I0717 19:33:30.785490  459741 main.go:141] libmachine: (old-k8s-version-998147) Waiting for SSH to be available...
	I0717 19:33:30.785502  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Getting to WaitForSSH function...
	I0717 19:33:30.787861  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.788286  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.788339  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.788506  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using SSH client type: external
	I0717 19:33:30.788535  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa (-rw-------)
	I0717 19:33:30.788575  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:30.788592  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | About to run SSH command:
	I0717 19:33:30.788605  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | exit 0
	I0717 19:33:30.916827  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | SSH cmd err, output: <nil>: 
	I0717 19:33:30.917232  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetConfigRaw
	I0717 19:33:30.917949  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:30.920672  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.921033  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.921069  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.921321  459741 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/config.json ...
	I0717 19:33:30.921518  459741 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:30.921538  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:30.921777  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:30.923995  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.924337  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:30.924364  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:30.924515  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:30.924708  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:30.924894  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:30.925021  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:30.925229  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:30.925417  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:30.925428  459741 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:31.037218  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:31.037249  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.037537  459741 buildroot.go:166] provisioning hostname "old-k8s-version-998147"
	I0717 19:33:31.037569  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.037782  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.040877  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.041209  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.041252  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.041382  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.041577  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.041764  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.041940  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.042121  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.042313  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.042329  459741 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-998147 && echo "old-k8s-version-998147" | sudo tee /etc/hostname
	I0717 19:33:31.169368  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-998147
	
	I0717 19:33:31.169401  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.172170  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.172475  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.172520  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.172739  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.172950  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.173133  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.173321  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.173557  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.173809  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.173828  459741 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-998147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-998147/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-998147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:31.293920  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
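The hostname step above is an idempotent /etc/hosts edit: replace an existing 127.0.1.1 entry if one is present, otherwise append a new one. A minimal pure-Go sketch of the same edit (the file path and function names are illustrative; minikube itself runs the shell shown above over SSH):

// hosts_sketch.go - illustrative re-implementation of the idempotent /etc/hosts edit above.
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname maps 127.0.1.1 to the given hostname in the hosts file,
// rewriting an existing 127.0.1.1 line or appending one if none exists.
func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	replaced := false
	for i, line := range lines {
		if strings.HasPrefix(strings.TrimSpace(line), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "old-k8s-version-998147"); err != nil {
		fmt.Println("update failed:", err)
	}
}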
	I0717 19:33:31.293957  459741 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:31.293997  459741 buildroot.go:174] setting up certificates
	I0717 19:33:31.294010  459741 provision.go:84] configureAuth start
	I0717 19:33:31.294022  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetMachineName
	I0717 19:33:31.294383  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:31.297356  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.297766  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.297800  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.297961  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.300159  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.300454  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.300507  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.300638  459741 provision.go:143] copyHostCerts
	I0717 19:33:31.300707  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:31.300721  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:31.300787  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:31.300917  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:31.300929  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:31.300962  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:31.301038  459741 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:31.301046  459741 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:31.301066  459741 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:31.301112  459741 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-998147 san=[127.0.0.1 192.168.72.208 localhost minikube old-k8s-version-998147]
	I0717 19:33:32.217560  459061 start.go:364] duration metric: took 53.370503448s to acquireMachinesLock for "embed-certs-637675"
	I0717 19:33:32.217640  459061 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:33:32.217653  459061 fix.go:54] fixHost starting: 
	I0717 19:33:32.218221  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:33:32.218273  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:33:32.236152  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I0717 19:33:32.236693  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:33:32.237234  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:33:32.237261  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:33:32.237630  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:33:32.237827  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:32.237981  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:33:32.239582  459061 fix.go:112] recreateIfNeeded on embed-certs-637675: state=Stopped err=<nil>
	I0717 19:33:32.239630  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	W0717 19:33:32.239777  459061 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:33:32.241662  459061 out.go:177] * Restarting existing kvm2 VM for "embed-certs-637675" ...
	I0717 19:33:28.164383  459147 pod_ready.go:92] pod "etcd-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.164416  459147 pod_ready.go:81] duration metric: took 1.506759615s for pod "etcd-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.164430  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.169329  459147 pod_ready.go:92] pod "kube-apiserver-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.169359  459147 pod_ready.go:81] duration metric: took 4.920897ms for pod "kube-apiserver-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.169374  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.174231  459147 pod_ready.go:92] pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:28.174256  459147 pod_ready.go:81] duration metric: took 4.874197ms for pod "kube-controller-manager-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:28.174270  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:30.181752  459147 pod_ready.go:102] pod "kube-proxy-x85f5" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:32.181095  459147 pod_ready.go:92] pod "kube-proxy-x85f5" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:32.181128  459147 pod_ready.go:81] duration metric: took 4.006849577s for pod "kube-proxy-x85f5" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.181146  459147 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.186196  459147 pod_ready.go:92] pod "kube-scheduler-no-preload-713715" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:32.186226  459147 pod_ready.go:81] duration metric: took 5.071066ms for pod "kube-scheduler-no-preload-713715" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:32.186240  459147 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
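The pod_ready.go entries above poll each kube-system pod until its Ready condition turns True or the per-pod timeout expires. A minimal client-go sketch of that kind of readiness poll (pod name, timeout, and kubeconfig path are illustrative; this is not minikube's actual helper):

// podready_sketch.go - illustrative readiness poll, not minikube's pod_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	name, ns, timeout := "metrics-server-78fcd8795b-q2jgb", "kube-system", 6*time.Minute
	for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
	}
	fmt.Printf("timed out waiting for pod %q\n", name)
}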
	I0717 19:33:31.522479  459741 provision.go:177] copyRemoteCerts
	I0717 19:33:31.522546  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:31.522602  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.525768  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.526171  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.526203  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.526344  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.526551  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.526724  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.526904  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:31.612117  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 19:33:31.638832  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:31.664757  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:31.689941  459741 provision.go:87] duration metric: took 395.916596ms to configureAuth
	I0717 19:33:31.689975  459741 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:31.690190  459741 config.go:182] Loaded profile config "old-k8s-version-998147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:33:31.690265  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.692837  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.693207  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.693234  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.693449  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.693671  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.693826  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.694059  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.694245  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:31.694413  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:31.694429  459741 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:31.974825  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:31.974852  459741 machine.go:97] duration metric: took 1.053320969s to provisionDockerMachine
	I0717 19:33:31.974865  459741 start.go:293] postStartSetup for "old-k8s-version-998147" (driver="kvm2")
	I0717 19:33:31.974875  459741 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:31.974896  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:31.975219  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:31.975248  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:31.978388  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.978767  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:31.978799  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:31.979026  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:31.979228  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:31.979423  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:31.979548  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.063516  459741 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:32.067826  459741 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:32.067854  459741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:32.067935  459741 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:32.068032  459741 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:32.068178  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:32.077672  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:32.102750  459741 start.go:296] duration metric: took 127.86801ms for postStartSetup
	I0717 19:33:32.102793  459741 fix.go:56] duration metric: took 18.724124854s for fixHost
	I0717 19:33:32.102816  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.105928  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.106324  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.106349  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.106498  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.106750  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.106912  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.107091  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.107267  459741 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:32.107435  459741 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I0717 19:33:32.107447  459741 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:33:32.217378  459741 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244812.173823160
	
	I0717 19:33:32.217412  459741 fix.go:216] guest clock: 1721244812.173823160
	I0717 19:33:32.217424  459741 fix.go:229] Guest: 2024-07-17 19:33:32.17382316 +0000 UTC Remote: 2024-07-17 19:33:32.102798084 +0000 UTC m=+260.639424711 (delta=71.025076ms)
	I0717 19:33:32.217462  459741 fix.go:200] guest clock delta is within tolerance: 71.025076ms
	I0717 19:33:32.217476  459741 start.go:83] releasing machines lock for "old-k8s-version-998147", held for 18.838841423s
	I0717 19:33:32.217515  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.217908  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:32.221349  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.221669  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.221701  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.221823  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222444  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222647  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .DriverName
	I0717 19:33:32.222744  459741 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:32.222799  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.222935  459741 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:32.222963  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHHostname
	I0717 19:33:32.225811  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.225842  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226180  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.226207  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226235  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:32.226252  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:32.226347  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.226651  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.226654  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHPort
	I0717 19:33:32.226818  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHKeyPath
	I0717 19:33:32.226911  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.226963  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetSSHUsername
	I0717 19:33:32.227238  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.227243  459741 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/old-k8s-version-998147/id_rsa Username:docker}
	I0717 19:33:32.331645  459741 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:32.338968  459741 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:32.491164  459741 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:32.498407  459741 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:32.498472  459741 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:32.515829  459741 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:32.515858  459741 start.go:495] detecting cgroup driver to use...
	I0717 19:33:32.515926  459741 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:32.534094  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:32.549874  459741 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:32.549938  459741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:32.565389  459741 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:32.580187  459741 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:32.709855  459741 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:32.889734  459741 docker.go:233] disabling docker service ...
	I0717 19:33:32.889804  459741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:32.909179  459741 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:32.923944  459741 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:33.043740  459741 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:33.174272  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:33.189545  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:33.210166  459741 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 19:33:33.210238  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.222478  459741 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:33.222547  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.234479  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.247161  459741 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:33.258702  459741 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:33.271516  459741 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:33.282032  459741 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:33.282087  459741 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:33.296554  459741 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:33:33.307378  459741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:33.447447  459741 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:33:33.606295  459741 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:33.606388  459741 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:33.611193  459741 start.go:563] Will wait 60s for crictl version
	I0717 19:33:33.611252  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:33.615370  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:33.660721  459741 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:33.660803  459741 ssh_runner.go:195] Run: crio --version
	I0717 19:33:33.695406  459741 ssh_runner.go:195] Run: crio --version
	I0717 19:33:33.727703  459741 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 19:33:32.243015  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Start
	I0717 19:33:32.243191  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring networks are active...
	I0717 19:33:32.244008  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring network default is active
	I0717 19:33:32.244302  459061 main.go:141] libmachine: (embed-certs-637675) Ensuring network mk-embed-certs-637675 is active
	I0717 19:33:32.244826  459061 main.go:141] libmachine: (embed-certs-637675) Getting domain xml...
	I0717 19:33:32.245560  459061 main.go:141] libmachine: (embed-certs-637675) Creating domain...
	I0717 19:33:33.537081  459061 main.go:141] libmachine: (embed-certs-637675) Waiting to get IP...
	I0717 19:33:33.538117  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:33.538562  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:33.538630  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:33.538531  460929 retry.go:31] will retry after 245.180235ms: waiting for machine to come up
	I0717 19:33:33.784957  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:33.785535  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:33.785567  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:33.785490  460929 retry.go:31] will retry after 353.289988ms: waiting for machine to come up
	I0717 19:33:34.141088  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.141697  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.141721  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.141637  460929 retry.go:31] will retry after 404.344963ms: waiting for machine to come up
	I0717 19:33:34.547331  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.547928  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.547956  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.547822  460929 retry.go:31] will retry after 382.194721ms: waiting for machine to come up
	I0717 19:33:34.931269  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:34.931746  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:34.931776  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:34.931653  460929 retry.go:31] will retry after 485.884671ms: waiting for machine to come up
	I0717 19:33:35.419418  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:35.419957  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:35.419991  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:35.419896  460929 retry.go:31] will retry after 598.409396ms: waiting for machine to come up
	I0717 19:33:36.019507  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:36.020091  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:36.020118  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:36.020041  460929 retry.go:31] will retry after 815.010839ms: waiting for machine to come up
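The retry.go lines above wait for the restarted VM to obtain a DHCP lease, re-querying with a growing, jittered delay. A generic sketch of that retry pattern using only the Go standard library (the backoff policy and timeout here are illustrative, not minikube's exact implementation):

// retry_sketch.go - generic retry-with-backoff in the spirit of the retry.go lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn with a randomized, growing delay until it
// succeeds or maxWait elapses.
func retryUntil(maxWait time.Duration, fn func() error) error {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := fn(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay
	}
}

func main() {
	// Stand-in for "ask the hypervisor for the machine's current IP address".
	_ = retryUntil(3*time.Second, func() error {
		return errors.New("machine has no IP yet")
	})
}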
	I0717 19:33:33.866250  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:35.869264  459447 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:33.729003  459741 main.go:141] libmachine: (old-k8s-version-998147) Calling .GetIP
	I0717 19:33:33.732254  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:33.732730  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:d4:91", ip: ""} in network mk-old-k8s-version-998147: {Iface:virbr4 ExpiryTime:2024-07-17 20:22:53 +0000 UTC Type:0 Mac:52:54:00:e7:d4:91 Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:old-k8s-version-998147 Clientid:01:52:54:00:e7:d4:91}
	I0717 19:33:33.732761  459741 main.go:141] libmachine: (old-k8s-version-998147) DBG | domain old-k8s-version-998147 has defined IP address 192.168.72.208 and MAC address 52:54:00:e7:d4:91 in network mk-old-k8s-version-998147
	I0717 19:33:33.732992  459741 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:33.737578  459741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:33.751952  459741 kubeadm.go:883] updating cluster {Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:33.752069  459741 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 19:33:33.752141  459741 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:33.799085  459741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:33:33.799167  459741 ssh_runner.go:195] Run: which lz4
	I0717 19:33:33.803899  459741 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:33:33.808398  459741 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:33.808431  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 19:33:35.539736  459741 crio.go:462] duration metric: took 1.735871318s to copy over tarball
	I0717 19:33:35.539833  459741 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:34.210207  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:36.693543  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:36.837115  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:36.837531  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:36.837560  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:36.837482  460929 retry.go:31] will retry after 1.072167201s: waiting for machine to come up
	I0717 19:33:37.911591  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:37.912149  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:37.912173  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:37.912104  460929 retry.go:31] will retry after 1.782290473s: waiting for machine to come up
	I0717 19:33:39.696512  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:39.696980  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:39.697015  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:39.696923  460929 retry.go:31] will retry after 1.896567581s: waiting for machine to come up
	I0717 19:33:36.872836  459447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.872865  459447 pod_ready.go:81] duration metric: took 7.513409896s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.872876  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qq6gq" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.878642  459447 pod_ready.go:92] pod "kube-proxy-qq6gq" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.878665  459447 pod_ready.go:81] duration metric: took 5.782297ms for pod "kube-proxy-qq6gq" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.878673  459447 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.887916  459447 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:33:36.887943  459447 pod_ready.go:81] duration metric: took 9.259629ms for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:36.887957  459447 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" ...
	I0717 19:33:39.411899  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:38.677338  459741 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.137463162s)
	I0717 19:33:38.677381  459741 crio.go:469] duration metric: took 3.137607875s to extract the tarball
	I0717 19:33:38.677396  459741 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:38.721981  459741 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:38.756640  459741 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 19:33:38.756670  459741 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:33:38.756755  459741 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:38.756840  459741 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:38.756885  459741 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:38.756923  459741 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.756887  459741 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 19:33:38.756866  459741 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:38.756875  459741 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:38.757061  459741 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:38.758622  459741 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:38.758705  459741 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 19:33:38.758860  459741 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:38.758902  459741 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.758945  459741 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:38.758977  459741 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:38.759058  459741 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:38.759126  459741 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:38.947033  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 19:33:38.978340  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:38.989519  459741 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 19:33:38.989583  459741 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 19:33:38.989631  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.007170  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.034177  459741 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 19:33:39.034232  459741 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:39.034282  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.034287  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 19:33:39.062389  459741 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 19:33:39.062443  459741 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.062490  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.080521  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 19:33:39.080640  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 19:33:39.080739  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 19:33:39.101886  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.114010  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.122572  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.131514  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 19:33:39.145327  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 19:33:39.187564  459741 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 19:33:39.187685  459741 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.187756  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.192838  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.232745  459741 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 19:33:39.232807  459741 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.232822  459741 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 19:33:39.232864  459741 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.232897  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 19:33:39.232918  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.232867  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.249586  459741 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 19:33:39.249634  459741 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.249677  459741 ssh_runner.go:195] Run: which crictl
	I0717 19:33:39.280522  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 19:33:39.280616  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 19:33:39.280622  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 19:33:39.280736  459741 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 19:33:39.354545  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 19:33:39.354577  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 19:33:39.354740  459741 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 19:33:39.640493  459741 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:33:39.792919  459741 cache_images.go:92] duration metric: took 1.03622454s to LoadCachedImages
	W0717 19:33:39.793071  459741 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19282-392903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0717 19:33:39.793093  459741 kubeadm.go:934] updating node { 192.168.72.208 8443 v1.20.0 crio true true} ...
	I0717 19:33:39.793266  459741 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-998147 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:39.793390  459741 ssh_runner.go:195] Run: crio config
	I0717 19:33:39.854291  459741 cni.go:84] Creating CNI manager for ""
	I0717 19:33:39.854320  459741 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:39.854333  459741 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:39.854355  459741 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-998147 NodeName:old-k8s-version-998147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 19:33:39.854569  459741 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-998147"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
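The block above is the full kubeadm.yaml that gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below: four YAML documents in one file, InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta2 for this v1.20.0 cluster), followed by KubeletConfiguration and KubeProxyConfiguration. A small stand-alone Go sketch, assuming only the standard library, that splits such a file on its "---" separators and reports each document's apiVersion and kind:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Split a multi-document YAML file (such as the generated kubeadm.yaml) on its
// "---" separators and print the apiVersion/kind of every document.
func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: kinds <kubeadm.yaml>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for i, doc := range strings.Split(string(data), "\n---") {
		var apiVersion, kind string
		sc := bufio.NewScanner(strings.NewReader(doc))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if apiVersion == "" && strings.HasPrefix(line, "apiVersion:") {
				apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
			}
			if kind == "" && strings.HasPrefix(line, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
			}
		}
		fmt.Printf("document %d: %s %s\n", i+1, apiVersion, kind)
	}
}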
	
	I0717 19:33:39.854672  459741 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 19:33:39.865802  459741 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:39.865892  459741 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:39.878728  459741 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 19:33:39.899402  459741 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:39.917946  459741 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 19:33:39.937916  459741 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:39.942211  459741 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
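The bash one-liner above pins control-plane.minikube.internal in /etc/hosts idempotently: any existing line for that name is filtered out, the fresh "192.168.72.208	control-plane.minikube.internal" mapping is appended, and the result is copied back over /etc/hosts via a temp file. A rough Go equivalent of the same rewrite (pinHostEntry is an illustrative helper; it writes the file directly rather than going through sudo and a temp copy):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostEntry rewrites hostsPath so that exactly one line maps ip to host,
// dropping any previous line that already ends in a tab followed by host.
func pinHostEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry, like the grep -v in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHostEntry("/etc/hosts", "192.168.72.208", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}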
	I0717 19:33:39.957083  459741 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:40.077407  459741 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:40.096211  459741 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147 for IP: 192.168.72.208
	I0717 19:33:40.096244  459741 certs.go:194] generating shared ca certs ...
	I0717 19:33:40.096269  459741 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:40.096511  459741 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:40.096578  459741 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:40.096592  459741 certs.go:256] generating profile certs ...
	I0717 19:33:40.096727  459741 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/client.key
	I0717 19:33:40.096794  459741 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key.204e9011
	I0717 19:33:40.096852  459741 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key
	I0717 19:33:40.097009  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:40.097049  459741 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:40.097062  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:40.097095  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:40.097133  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:40.097161  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:40.097215  459741 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:40.097920  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:40.144174  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:40.182700  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:40.222340  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:40.259248  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 19:33:40.302619  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:33:40.335170  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:40.373447  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/old-k8s-version-998147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:33:40.409075  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:40.435692  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:40.460419  459741 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:40.492357  459741 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:40.515212  459741 ssh_runner.go:195] Run: openssl version
	I0717 19:33:40.523462  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:40.537951  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.544201  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.544264  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:40.552233  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:40.567486  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:40.583035  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.589287  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.589367  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:40.595802  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:40.613013  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:40.625080  459741 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.630225  459741 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.630298  459741 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:40.636697  459741 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:40.647728  459741 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:40.653165  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:40.659380  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:40.666126  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:40.673361  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:40.680123  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:40.686669  459741 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
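Each "openssl x509 -checkend 86400" run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that is how the restart path decides whether the existing control-plane certs can be reused. The same check expressed with Go's standard library (expiresWithin is a hypothetical helper; the path is just one of the certs listed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before
// now+window -- the same question `openssl x509 -checkend <seconds>` answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would need to be regenerated")
	}
}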
	I0717 19:33:40.693569  459741 kubeadm.go:392] StartCluster: {Name:old-k8s-version-998147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-998147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:40.693682  459741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:40.693767  459741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:40.737536  459741 cri.go:89] found id: ""
	I0717 19:33:40.737637  459741 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:40.749268  459741 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:40.749292  459741 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:40.749347  459741 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:40.760298  459741 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:40.761436  459741 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-998147" does not appear in /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:33:40.762162  459741 kubeconfig.go:62] /home/jenkins/minikube-integration/19282-392903/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-998147" cluster setting kubeconfig missing "old-k8s-version-998147" context setting]
	I0717 19:33:40.763136  459741 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:40.860353  459741 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:40.871291  459741 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.208
	I0717 19:33:40.871329  459741 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:40.871348  459741 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:40.871404  459741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:40.909329  459741 cri.go:89] found id: ""
	I0717 19:33:40.909419  459741 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:40.926501  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:40.937534  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:40.937565  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:40.937640  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:40.946613  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:40.946692  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:40.956996  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:40.965988  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:40.966046  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:40.975285  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:40.984577  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:40.984642  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:40.994458  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:41.007766  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:41.007821  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:41.020451  459741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:41.034173  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:41.176766  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:38.694137  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:40.694562  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:41.594983  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:41.595523  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:41.595554  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:41.595469  460929 retry.go:31] will retry after 2.022688841s: waiting for machine to come up
	I0717 19:33:43.619805  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:43.620241  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:43.620277  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:43.620212  460929 retry.go:31] will retry after 3.581051367s: waiting for machine to come up
	I0717 19:33:41.896941  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:44.394301  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:42.579917  459741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.403105878s)
	I0717 19:33:42.579958  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:42.840718  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:42.961394  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:43.055710  459741 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:33:43.055799  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:43.556468  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:44.055954  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:44.555966  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:45.056266  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:45.556627  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:46.056807  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:42.695989  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:45.194178  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:47.195661  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:47.205836  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:47.206321  459061 main.go:141] libmachine: (embed-certs-637675) DBG | unable to find current IP address of domain embed-certs-637675 in network mk-embed-certs-637675
	I0717 19:33:47.206343  459061 main.go:141] libmachine: (embed-certs-637675) DBG | I0717 19:33:47.206278  460929 retry.go:31] will retry after 4.261122451s: waiting for machine to come up
	I0717 19:33:46.894466  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:49.395152  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:46.555904  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:47.056616  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:47.556787  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:48.056072  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:48.555979  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:49.056074  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:49.556619  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:50.056758  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:50.555862  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:51.055991  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
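The burst of identical "pgrep -xnf kube-apiserver.*minikube.*" runs above is a poll: after the kubeadm init phases, the process list is re-checked roughly every half second until a kube-apiserver whose command line matches the pattern shows up. A self-contained Go sketch of that polling pattern (waitForProcess is illustrative and uses plain "pgrep -f"; the interval and timeout are assumptions, not minikube's exact values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -f pattern` until it succeeds (a matching process
// exists) or the deadline passes; pgrep exits 0 when at least one process matches.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-f", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no process matching %q appeared within %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver process is up")
}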
	I0717 19:33:49.692660  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.693700  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.470426  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.470961  459061 main.go:141] libmachine: (embed-certs-637675) Found IP for machine: 192.168.39.140
	I0717 19:33:51.470987  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has current primary IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.470994  459061 main.go:141] libmachine: (embed-certs-637675) Reserving static IP address...
	I0717 19:33:51.471473  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "embed-certs-637675", mac: "52:54:00:33:d5:fa", ip: "192.168.39.140"} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.471502  459061 main.go:141] libmachine: (embed-certs-637675) Reserved static IP address: 192.168.39.140
	I0717 19:33:51.471530  459061 main.go:141] libmachine: (embed-certs-637675) DBG | skip adding static IP to network mk-embed-certs-637675 - found existing host DHCP lease matching {name: "embed-certs-637675", mac: "52:54:00:33:d5:fa", ip: "192.168.39.140"}
	I0717 19:33:51.471548  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Getting to WaitForSSH function...
	I0717 19:33:51.471563  459061 main.go:141] libmachine: (embed-certs-637675) Waiting for SSH to be available...
	I0717 19:33:51.474038  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.474414  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.474445  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.474588  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Using SSH client type: external
	I0717 19:33:51.474617  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Using SSH private key: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa (-rw-------)
	I0717 19:33:51.474655  459061 main.go:141] libmachine: (embed-certs-637675) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:33:51.474675  459061 main.go:141] libmachine: (embed-certs-637675) DBG | About to run SSH command:
	I0717 19:33:51.474699  459061 main.go:141] libmachine: (embed-certs-637675) DBG | exit 0
	I0717 19:33:51.604737  459061 main.go:141] libmachine: (embed-certs-637675) DBG | SSH cmd err, output: <nil>: 
	I0717 19:33:51.605100  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetConfigRaw
	I0717 19:33:51.605831  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:51.608613  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.608977  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.609023  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.609289  459061 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/config.json ...
	I0717 19:33:51.609523  459061 machine.go:94] provisionDockerMachine start ...
	I0717 19:33:51.609557  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:51.609778  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.611949  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.612259  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.612295  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.612408  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.612598  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.612765  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.612911  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.613071  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.613293  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.613307  459061 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:33:51.716785  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:33:51.716815  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.717101  459061 buildroot.go:166] provisioning hostname "embed-certs-637675"
	I0717 19:33:51.717136  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.717318  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.719807  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.720137  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.720163  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.720315  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.720545  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.720719  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.720892  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.721086  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.721258  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.721271  459061 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-637675 && echo "embed-certs-637675" | sudo tee /etc/hostname
	I0717 19:33:51.844077  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-637675
	
	I0717 19:33:51.844111  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.847369  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.847949  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.847987  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.848185  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:51.848361  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.848523  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:51.848703  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:51.848912  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:51.849127  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:51.849145  459061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-637675' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-637675/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-637675' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:33:51.961570  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:33:51.961608  459061 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19282-392903/.minikube CaCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19282-392903/.minikube}
	I0717 19:33:51.961632  459061 buildroot.go:174] setting up certificates
	I0717 19:33:51.961644  459061 provision.go:84] configureAuth start
	I0717 19:33:51.961658  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetMachineName
	I0717 19:33:51.961931  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:51.964788  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.965123  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.965150  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.965303  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:51.967517  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.967881  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:51.967910  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:51.968060  459061 provision.go:143] copyHostCerts
	I0717 19:33:51.968129  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem, removing ...
	I0717 19:33:51.968140  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem
	I0717 19:33:51.968203  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/ca.pem (1078 bytes)
	I0717 19:33:51.968333  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem, removing ...
	I0717 19:33:51.968344  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem
	I0717 19:33:51.968371  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/cert.pem (1123 bytes)
	I0717 19:33:51.968546  459061 exec_runner.go:144] found /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem, removing ...
	I0717 19:33:51.968558  459061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem
	I0717 19:33:51.968605  459061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19282-392903/.minikube/key.pem (1675 bytes)
	I0717 19:33:51.968692  459061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem org=jenkins.embed-certs-637675 san=[127.0.0.1 192.168.39.140 embed-certs-637675 localhost minikube]
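provision.go generates a per-machine server certificate whose SAN list is exactly what the log shows: 127.0.0.1, the machine IP 192.168.39.140, the hostname embed-certs-637675, localhost and minikube, so the endpoint is reachable by IP or by name. The Go sketch below only illustrates how such a SAN list lands in a certificate; it self-signs for brevity, whereas the real server.pem is signed by the ca.pem/ca-key.pem pair named in the log, and the 26280h lifetime is taken from the CertExpiration field above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// Build a server certificate carrying the IP and DNS SANs seen in the log.
// Self-signed here for brevity; the real server.pem is signed by minikube's CA.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-637675"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.140")},
		DNSNames:     []string{"embed-certs-637675", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("server.pem", pemBytes, 0644); err != nil {
		panic(err)
	}
}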
	I0717 19:33:52.257323  459061 provision.go:177] copyRemoteCerts
	I0717 19:33:52.257408  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:33:52.257443  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.260461  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.260873  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.260897  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.261094  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.261307  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.261485  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.261619  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.347197  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:33:52.372509  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 19:33:52.397643  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:33:52.421482  459061 provision.go:87] duration metric: took 459.823049ms to configureAuth
	I0717 19:33:52.421511  459061 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:33:52.421712  459061 config.go:182] Loaded profile config "embed-certs-637675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:33:52.421789  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.424390  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.424796  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.424827  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.425027  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.425221  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.425363  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.425502  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.425661  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:52.425872  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:52.425902  459061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:33:52.699426  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:33:52.699458  459061 machine.go:97] duration metric: took 1.089918524s to provisionDockerMachine
	I0717 19:33:52.699470  459061 start.go:293] postStartSetup for "embed-certs-637675" (driver="kvm2")
	I0717 19:33:52.699483  459061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:33:52.699505  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.699888  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:33:52.699943  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.703018  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.703417  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.703463  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.703693  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.704007  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.704318  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.704519  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.791925  459061 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:33:52.795954  459061 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:33:52.795980  459061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/addons for local assets ...
	I0717 19:33:52.796095  459061 filesync.go:126] Scanning /home/jenkins/minikube-integration/19282-392903/.minikube/files for local assets ...
	I0717 19:33:52.796191  459061 filesync.go:149] local asset: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem -> 4001712.pem in /etc/ssl/certs
	I0717 19:33:52.796308  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:33:52.805548  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:52.829531  459061 start.go:296] duration metric: took 130.04771ms for postStartSetup
	I0717 19:33:52.829569  459061 fix.go:56] duration metric: took 20.611916701s for fixHost
	I0717 19:33:52.829611  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.832274  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.832744  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.832778  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.832883  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.833094  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.833276  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.833448  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.833632  459061 main.go:141] libmachine: Using SSH client type: native
	I0717 19:33:52.833852  459061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0717 19:33:52.833871  459061 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 19:33:52.941152  459061 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721244832.915250809
	
	I0717 19:33:52.941180  459061 fix.go:216] guest clock: 1721244832.915250809
	I0717 19:33:52.941194  459061 fix.go:229] Guest: 2024-07-17 19:33:52.915250809 +0000 UTC Remote: 2024-07-17 19:33:52.829573693 +0000 UTC m=+356.572558813 (delta=85.677116ms)
	I0717 19:33:52.941221  459061 fix.go:200] guest clock delta is within tolerance: 85.677116ms
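fix.go reads the guest clock over SSH (the "date +%s.%N" run above), compares it with the host-side timestamp, and only forces a resync when the skew exceeds its tolerance; here the delta is 85.677116ms, which is accepted. A tiny Go reproduction of that comparison using the two timestamps from the log (the 2-second tolerance in the example is an assumption for illustration, not necessarily the value minikube uses):

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute difference between the guest and host clocks.
func clockDelta(guest, host time.Time) time.Duration {
	d := host.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	// Timestamps taken from the log above: the guest's `date +%s.%N` output and
	// the host-side "Remote" time recorded by fix.go.
	guest := time.Unix(1721244832, 915250809)
	host := time.Date(2024, 7, 17, 19, 33, 52, 829573693, time.UTC)
	d := clockDelta(guest, host)
	fmt.Printf("delta=%s, within assumed 2s tolerance: %v\n", d, d <= 2*time.Second)
}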
	I0717 19:33:52.941232  459061 start.go:83] releasing machines lock for "embed-certs-637675", held for 20.723622875s
	I0717 19:33:52.941257  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.941557  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:52.944096  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.944498  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.944526  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.944682  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945170  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945409  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:33:52.945520  459061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:33:52.945595  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.945624  459061 ssh_runner.go:195] Run: cat /version.json
	I0717 19:33:52.945653  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:33:52.948197  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948530  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.948557  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948575  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948781  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.948912  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:52.948936  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:52.948966  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.949080  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:33:52.949205  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.949228  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:33:52.949348  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:33:52.949352  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:52.949465  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:33:53.054206  459061 ssh_runner.go:195] Run: systemctl --version
	I0717 19:33:53.060916  459061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:33:53.204303  459061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:33:53.210204  459061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:33:53.210262  459061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:33:53.226045  459061 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:33:53.226072  459061 start.go:495] detecting cgroup driver to use...
	I0717 19:33:53.226138  459061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:33:53.243047  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:33:53.256611  459061 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:33:53.256678  459061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:33:53.269932  459061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:33:53.285394  459061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:33:53.412896  459061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:33:53.573675  459061 docker.go:233] disabling docker service ...
	I0717 19:33:53.573749  459061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:33:53.590083  459061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:33:53.603710  459061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:33:53.727530  459061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:33:53.873274  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:33:53.905871  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:33:53.926509  459061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:33:53.926583  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.937258  459061 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:33:53.937333  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.947782  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.958191  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.970004  459061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:33:53.982062  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:53.992942  459061 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:33:54.011137  459061 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
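
The four sed invocations above pin the pause image, switch CRI-O's cgroup_manager to cgroupfs, and open unprivileged ports via default_sysctls, all by rewriting /etc/crio/crio.conf.d/02-crio.conf in place. A minimal local sketch of the same rewrite in Go follows; the drop-in path is assumed to be a local copy, since minikube actually runs sed on the guest over SSH.

// Sketch only: a local-file equivalent of the first two `sed -i` calls above.
// The path is a hypothetical local copy of the CRI-O drop-in.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const confPath = "02-crio.conf" // hypothetical local copy of /etc/crio/crio.conf.d/02-crio.conf

	data, err := os.ReadFile(confPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))

	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(confPath, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
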
	I0717 19:33:54.022170  459061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:33:54.033118  459061 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:33:54.033183  459061 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:33:54.046510  459061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:33:54.056086  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:54.203486  459061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:33:54.336557  459061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:33:54.336645  459061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:33:54.342342  459061 start.go:563] Will wait 60s for crictl version
	I0717 19:33:54.342422  459061 ssh_runner.go:195] Run: which crictl
	I0717 19:33:54.346334  459061 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:33:54.388801  459061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:33:54.388898  459061 ssh_runner.go:195] Run: crio --version
	I0717 19:33:54.419237  459061 ssh_runner.go:195] Run: crio --version
	I0717 19:33:54.459513  459061 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 19:33:54.460727  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetIP
	I0717 19:33:54.463803  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:54.464194  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:33:54.464235  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:33:54.464521  459061 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:33:54.469869  459061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
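
The bash one-liner above makes the host.minikube.internal mapping idempotent: it filters out any existing line for that name, re-appends the fresh entry, and copies the result back over /etc/hosts. A rough Go equivalent, written against a scratch copy of the file rather than /etc/hosts itself (an assumption so the sketch is safe to run), looks like this:

// Sketch only: idempotent hosts-file update, mirroring the shell pipeline above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "hosts.copy" // hypothetical scratch copy of /etc/hosts
	const entry = "192.168.39.1\thost.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirrors: grep -v $'\thost.minikube.internal$'
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
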
	I0717 19:33:54.484510  459061 kubeadm.go:883] updating cluster {Name:embed-certs-637675 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:33:54.484680  459061 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 19:33:54.484750  459061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:54.530253  459061 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 19:33:54.530339  459061 ssh_runner.go:195] Run: which lz4
	I0717 19:33:54.534466  459061 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:33:54.538610  459061 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:33:54.538642  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 19:33:55.923529  459061 crio.go:462] duration metric: took 1.389095679s to copy over tarball
	I0717 19:33:55.923617  459061 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:33:51.894538  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:53.896853  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:56.394940  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:51.556187  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:52.056816  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:52.555884  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.056440  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.556003  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:54.056810  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:54.556947  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:55.055878  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:55.556110  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:56.056460  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:53.693746  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:55.695193  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:58.139069  459061 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.215401803s)
	I0717 19:33:58.139116  459061 crio.go:469] duration metric: took 2.215553314s to extract the tarball
	I0717 19:33:58.139127  459061 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:33:58.178293  459061 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:33:58.219163  459061 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:33:58.219188  459061 cache_images.go:84] Images are preloaded, skipping loading
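
The two `crictl images --output json` probes bracketing the tarball copy are how the preload state is decided: before extraction the pinned kube-apiserver tag is missing, afterwards every expected image is present. A hand-rolled version of that check might look like the sketch below; it assumes crictl is on PATH and reachable with sudo, and that its JSON output uses the images/repoTags field names.

// Sketch only: decode `crictl images --output json` and look for one pinned tag.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	const want = "registry.k8s.io/kube-apiserver:v1.30.2"

	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded image found:", want)
				return
			}
		}
	}
	fmt.Println("preloaded image missing:", want)
}
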
	I0717 19:33:58.219197  459061 kubeadm.go:934] updating node { 192.168.39.140 8443 v1.30.2 crio true true} ...
	I0717 19:33:58.219306  459061 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-637675 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:33:58.219383  459061 ssh_runner.go:195] Run: crio config
	I0717 19:33:58.262906  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:33:58.262925  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:33:58.262934  459061 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 19:33:58.262957  459061 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.140 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-637675 NodeName:embed-certs-637675 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:33:58.263084  459061 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-637675"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:33:58.263147  459061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 19:33:58.273657  459061 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:33:58.273723  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:33:58.283599  459061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0717 19:33:58.300393  459061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:33:58.317742  459061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
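
At this point the kubelet drop-in, the kubelet.service unit, and the multi-document kubeadm.yaml dumped above have all been copied onto the node. One property worth checking by hand is that the KubeletConfiguration's cgroupDriver matches the cgroup_manager written into the CRI-O drop-in earlier; the sketch below does that with gopkg.in/yaml.v3 against a hypothetical local copy of the file.

// Sketch only: walk the multi-document kubeadm.yaml and report cgroupDriver.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical copy of /var/tmp/minikube/kubeadm.yaml
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("kubelet cgroupDriver:", doc["cgroupDriver"])
			if doc["cgroupDriver"] != "cgroupfs" {
				fmt.Println(`warning: does not match CRI-O cgroup_manager = "cgroupfs"`)
			}
		}
	}
}
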
	I0717 19:33:58.334880  459061 ssh_runner.go:195] Run: grep 192.168.39.140	control-plane.minikube.internal$ /etc/hosts
	I0717 19:33:58.338573  459061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:33:58.350476  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:33:58.480706  459061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:33:58.498116  459061 certs.go:68] Setting up /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675 for IP: 192.168.39.140
	I0717 19:33:58.498139  459061 certs.go:194] generating shared ca certs ...
	I0717 19:33:58.498161  459061 certs.go:226] acquiring lock for ca certs: {Name:mkdc95c9e649ed1b684161ab382abd0c6d5d829f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:33:58.498326  459061 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key
	I0717 19:33:58.498380  459061 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key
	I0717 19:33:58.498394  459061 certs.go:256] generating profile certs ...
	I0717 19:33:58.498518  459061 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/client.key
	I0717 19:33:58.498580  459061 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.key.c8cdbf09
	I0717 19:33:58.498853  459061 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.key
	I0717 19:33:58.499016  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem (1338 bytes)
	W0717 19:33:58.499066  459061 certs.go:480] ignoring /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171_empty.pem, impossibly tiny 0 bytes
	I0717 19:33:58.499081  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:33:58.499115  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:33:58.499256  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:33:58.499299  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/certs/key.pem (1675 bytes)
	I0717 19:33:58.499435  459061 certs.go:484] found cert: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem (1708 bytes)
	I0717 19:33:58.500359  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:33:58.544981  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:33:58.588099  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:33:58.621983  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:33:58.652262  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 19:33:58.676887  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:33:58.701437  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:33:58.726502  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/embed-certs-637675/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:33:58.751839  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:33:58.777500  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/certs/400171.pem --> /usr/share/ca-certificates/400171.pem (1338 bytes)
	I0717 19:33:58.801388  459061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/ssl/certs/4001712.pem --> /usr/share/ca-certificates/4001712.pem (1708 bytes)
	I0717 19:33:58.825450  459061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:33:58.842717  459061 ssh_runner.go:195] Run: openssl version
	I0717 19:33:58.848256  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:33:58.858519  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.863057  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.863130  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:33:58.869045  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:33:58.879255  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/400171.pem && ln -fs /usr/share/ca-certificates/400171.pem /etc/ssl/certs/400171.pem"
	I0717 19:33:58.890546  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.895342  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:17 /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.895394  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/400171.pem
	I0717 19:33:58.901225  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/400171.pem /etc/ssl/certs/51391683.0"
	I0717 19:33:58.912043  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001712.pem && ln -fs /usr/share/ca-certificates/4001712.pem /etc/ssl/certs/4001712.pem"
	I0717 19:33:58.922557  459061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.926974  459061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:17 /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.927063  459061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001712.pem
	I0717 19:33:58.932819  459061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4001712.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:33:58.943396  459061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:33:58.947900  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:33:58.953946  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:33:58.960139  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:33:58.965932  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:33:58.971638  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:33:58.977437  459061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
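
The run of `openssl x509 ... -checkend 86400` calls above asks, for each control-plane certificate, whether it will still be valid 24 hours from now. The same test can be expressed with Go's crypto/x509, as in this sketch (the certificate path is an assumption, for illustration):

// Sketch only: the crypto/x509 equivalent of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver-kubelet-client.crt") // hypothetical local copy
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}

	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// -checkend 86400: fail if the cert expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past 24h:", cert.NotAfter)
}
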
	I0717 19:33:58.983041  459061 kubeadm.go:392] StartCluster: {Name:embed-certs-637675 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-637675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:33:58.983125  459061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:33:58.983159  459061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:59.026606  459061 cri.go:89] found id: ""
	I0717 19:33:59.026700  459061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:33:59.037020  459061 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:33:59.037045  459061 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:33:59.037089  459061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:33:59.046698  459061 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:33:59.047817  459061 kubeconfig.go:125] found "embed-certs-637675" server: "https://192.168.39.140:8443"
	I0717 19:33:59.049941  459061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:33:59.059451  459061 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.140
	I0717 19:33:59.059482  459061 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:33:59.059500  459061 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:33:59.059544  459061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:33:59.095066  459061 cri.go:89] found id: ""
	I0717 19:33:59.095128  459061 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:33:59.112170  459061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:33:59.122995  459061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:33:59.123014  459061 kubeadm.go:157] found existing configuration files:
	
	I0717 19:33:59.123063  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:33:59.133289  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:33:59.133372  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:33:59.143515  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:33:59.152845  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:33:59.152898  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:33:59.162821  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:33:59.173290  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:33:59.173353  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:33:59.184053  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:33:59.195281  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:33:59.195345  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:33:59.205300  459061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:33:59.219019  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:33:59.337326  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.220304  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.451460  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.631448  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:00.701064  459061 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:34:00.701166  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.201848  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
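
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs are a simple 500ms poll: kubeadm has just written the control-plane static pods, and the tooling waits for the kube-apiserver process to appear before probing its health endpoint. A stripped-down version of that wait loop (running pgrep directly, without sudo or SSH, which is an assumption for illustration) could look like:

// Sketch only: poll pgrep until the apiserver process shows up or a deadline passes.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the full command line.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for apiserver process")
	os.Exit(1)
}
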
	I0717 19:33:58.895830  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.394535  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:56.556934  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.055977  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.556878  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.056308  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:58.556348  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:59.056674  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:59.556870  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:00.055931  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:00.555977  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.055886  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:33:57.695265  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:33:59.973534  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:02.193004  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.701254  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:01.809514  459061 api_server.go:72] duration metric: took 1.10844859s to wait for apiserver process to appear ...
	I0717 19:34:01.809547  459061 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:34:01.809597  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:01.810183  459061 api_server.go:269] stopped: https://192.168.39.140:8443/healthz: Get "https://192.168.39.140:8443/healthz": dial tcp 192.168.39.140:8443: connect: connection refused
	I0717 19:34:02.309904  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.789701  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:34:04.789732  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:34:04.789745  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.862326  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:34:04.862359  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:34:04.862371  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:04.885715  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:04.885755  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:05.310281  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:05.314611  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:05.314645  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:05.810297  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:05.817458  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:05.817492  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
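
The healthz probes above follow the usual pattern while the control plane settles: first connection refused, then 403 for the anonymous user, then 500 with individual post-start hooks still failing, until everything reports ok. A bare-bones retry loop against the same endpoint is sketched below; skipping TLS verification and treating any 200 as healthy are assumptions made to keep the example self-contained, whereas minikube itself presents the cluster's client certificates.

// Sketch only: poll https://<node>:8443/healthz until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: no client certs
		},
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.140:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for healthz")
	os.Exit(1)
}
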
	I0717 19:34:03.395467  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:05.894353  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:01.556897  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:02.056800  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:02.556122  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:03.056427  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:03.556914  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.056571  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.556144  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:05.056037  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:05.555875  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:06.056743  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:04.193618  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.194585  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.310494  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:06.318694  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:06.318740  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:06.809794  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:06.815231  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:06.815259  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:07.310287  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:07.314865  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:07.314892  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:07.810489  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:07.815153  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:34:07.815184  459061 api_server.go:103] status: https://192.168.39.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:34:08.310494  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:34:08.315173  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0717 19:34:08.321509  459061 api_server.go:141] control plane version: v1.30.2
	I0717 19:34:08.321539  459061 api_server.go:131] duration metric: took 6.51198343s to wait for apiserver health ...
	I0717 19:34:08.321550  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:34:08.321558  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:34:08.323369  459061 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:34:08.324555  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:34:08.336384  459061 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:34:08.357196  459061 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:34:08.373813  459061 system_pods.go:59] 8 kube-system pods found
	I0717 19:34:08.373849  459061 system_pods.go:61] "coredns-7db6d8ff4d-8brst" [aec5eaab-66a7-4221-84a1-b7967bd26cb8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:34:08.373856  459061 system_pods.go:61] "etcd-embed-certs-637675" [f2e395a3-fd1f-4a92-98ce-d6093d7b2faf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:34:08.373864  459061 system_pods.go:61] "kube-apiserver-embed-certs-637675" [358154e3-59e5-4535-9e1d-ee3b9eab5464] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:34:08.373871  459061 system_pods.go:61] "kube-controller-manager-embed-certs-637675" [641c70ba-a6fa-4975-bdb5-727b5ba64a87] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:34:08.373875  459061 system_pods.go:61] "kube-proxy-4cv66" [1a561d4e-4910-4ff0-9a1e-070e60e27cb4] Running
	I0717 19:34:08.373879  459061 system_pods.go:61] "kube-scheduler-embed-certs-637675" [83f50c1c-44ca-4b1f-ad85-0c617f1c8a67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:34:08.373886  459061 system_pods.go:61] "metrics-server-569cc877fc-mtnc6" [c44ea24f-67b5-4540-8c27-5b0068ac55b1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:34:08.373889  459061 system_pods.go:61] "storage-provisioner" [c42c411b-4206-4686-95c4-c9c279877684] Running
	I0717 19:34:08.373895  459061 system_pods.go:74] duration metric: took 16.671935ms to wait for pod list to return data ...
	I0717 19:34:08.373902  459061 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:34:08.388698  459061 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:34:08.388737  459061 node_conditions.go:123] node cpu capacity is 2
	I0717 19:34:08.388749  459061 node_conditions.go:105] duration metric: took 14.84302ms to run NodePressure ...
	I0717 19:34:08.388769  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:34:08.750983  459061 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 19:34:08.759547  459061 kubeadm.go:739] kubelet initialised
	I0717 19:34:08.759579  459061 kubeadm.go:740] duration metric: took 8.564098ms waiting for restarted kubelet to initialise ...
	I0717 19:34:08.759592  459061 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:34:08.769683  459061 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.780332  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.780364  459061 pod_ready.go:81] duration metric: took 10.641436ms for pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.780377  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "coredns-7db6d8ff4d-8brst" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.780387  459061 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.791556  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "etcd-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.791590  459061 pod_ready.go:81] duration metric: took 11.19204ms for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.791605  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "etcd-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.791613  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.801822  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.801874  459061 pod_ready.go:81] duration metric: took 10.246706ms for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.801889  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.801905  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:08.807704  459061 pod_ready.go:97] node "embed-certs-637675" hosting pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.807735  459061 pod_ready.go:81] duration metric: took 5.8166ms for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	E0717 19:34:08.807747  459061 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-637675" hosting pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-637675" has status "Ready":"False"
	I0717 19:34:08.807755  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4cv66" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:09.161548  459061 pod_ready.go:92] pod "kube-proxy-4cv66" in "kube-system" namespace has status "Ready":"True"
	I0717 19:34:09.161587  459061 pod_ready.go:81] duration metric: took 353.822822ms for pod "kube-proxy-4cv66" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:09.161597  459061 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:11.168387  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:07.894730  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:09.895797  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:06.556740  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:07.056120  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:07.556375  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.055926  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.556426  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:09.056856  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:09.556032  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:10.056791  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:10.556117  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:11.056198  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:08.694237  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:11.192662  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:13.168686  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:15.668585  459061 pod_ready.go:102] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:12.395034  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:14.895242  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:11.556103  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:12.056463  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:12.556709  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.056048  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.556926  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:14.056810  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:14.556793  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:15.056168  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:15.556716  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:16.056041  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:13.194925  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:15.693550  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:17.668639  459061 pod_ready.go:92] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:34:17.668755  459061 pod_ready.go:81] duration metric: took 8.50714283s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:17.668772  459061 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" ...
	I0717 19:34:19.678850  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:17.395670  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:19.395898  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:21.396841  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:16.556695  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.056877  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.556620  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:18.056628  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:18.556552  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:19.056137  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:19.556627  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:20.056655  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:20.556041  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:21.056058  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:17.694895  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:20.194174  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:22.176132  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:24.674293  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:23.894981  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.394921  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:21.556663  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.056552  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.556508  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:23.056623  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:23.556414  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:24.055964  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:24.556741  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:25.056721  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:25.556914  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:26.056520  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:22.693472  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:24.693880  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.695637  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.675680  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:29.176560  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:28.896034  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.394391  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:26.555925  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:27.056754  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:27.555925  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:28.056226  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:28.556626  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.056219  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.556961  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:30.056546  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:30.555883  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:31.056398  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:29.195231  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.693669  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.674839  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:33.676172  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:35.676669  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:33.394904  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:35.399901  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:31.556766  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:32.056928  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:32.556232  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:33.055917  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:33.556864  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.056869  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.555951  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:35.056718  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:35.556230  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:36.056542  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:34.195066  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:36.692760  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:38.175828  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:40.676034  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:37.894862  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:40.399004  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:36.556557  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:37.056940  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:37.556241  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.056369  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.555969  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:39.056289  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:39.556107  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:40.055999  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:40.556561  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:41.055882  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:38.693922  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:41.194229  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:42.676087  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:44.680245  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:42.898155  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:45.402470  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:41.556589  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:42.055932  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:42.556345  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:43.056754  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:43.056873  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:43.097168  459741 cri.go:89] found id: ""
	I0717 19:34:43.097214  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.097226  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:43.097234  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:43.097302  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:43.139033  459741 cri.go:89] found id: ""
	I0717 19:34:43.139067  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.139077  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:43.139084  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:43.139138  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:43.179520  459741 cri.go:89] found id: ""
	I0717 19:34:43.179549  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.179558  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:43.179566  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:43.179705  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:43.216014  459741 cri.go:89] found id: ""
	I0717 19:34:43.216044  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.216063  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:43.216071  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:43.216141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:43.250985  459741 cri.go:89] found id: ""
	I0717 19:34:43.251030  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.251038  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:43.251044  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:43.251109  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:43.286797  459741 cri.go:89] found id: ""
	I0717 19:34:43.286840  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.286849  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:43.286856  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:43.286919  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:43.321626  459741 cri.go:89] found id: ""
	I0717 19:34:43.321657  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.321665  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:43.321671  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:43.321733  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:43.355415  459741 cri.go:89] found id: ""
	I0717 19:34:43.355444  459741 logs.go:276] 0 containers: []
	W0717 19:34:43.355452  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:43.355462  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:43.355476  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:43.409331  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:43.409369  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:43.424013  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:43.424038  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:43.559102  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:43.559132  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:43.559149  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:43.625751  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:43.625791  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:46.168132  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:46.196943  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:46.197013  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:46.254167  459741 cri.go:89] found id: ""
	I0717 19:34:46.254197  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.254205  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:46.254211  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:46.254277  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:46.291018  459741 cri.go:89] found id: ""
	I0717 19:34:46.291052  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.291063  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:46.291072  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:46.291136  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:46.331767  459741 cri.go:89] found id: ""
	I0717 19:34:46.331812  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.331825  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:46.331835  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:46.331918  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:46.373157  459741 cri.go:89] found id: ""
	I0717 19:34:46.373206  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.373218  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:46.373226  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:46.373297  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:46.413014  459741 cri.go:89] found id: ""
	I0717 19:34:46.413041  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.413055  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:46.413061  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:46.413114  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:46.456115  459741 cri.go:89] found id: ""
	I0717 19:34:46.456148  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.456159  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:46.456167  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:46.456230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:46.492962  459741 cri.go:89] found id: ""
	I0717 19:34:46.493048  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.493063  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:46.493074  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:46.493149  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:43.195298  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:45.695368  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:47.175268  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:49.176199  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:47.895768  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:50.395078  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:46.533824  459741 cri.go:89] found id: ""
	I0717 19:34:46.533856  459741 logs.go:276] 0 containers: []
	W0717 19:34:46.533868  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:46.533882  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:46.533899  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:46.614205  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:46.614229  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:46.614242  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:46.689833  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:46.689875  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:46.729427  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:46.729463  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:46.779887  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:46.779930  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:49.294846  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:49.308554  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:49.308625  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:49.343774  459741 cri.go:89] found id: ""
	I0717 19:34:49.343802  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.343810  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:49.343816  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:49.343872  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:49.380698  459741 cri.go:89] found id: ""
	I0717 19:34:49.380729  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.380737  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:49.380744  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:49.380796  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:49.422026  459741 cri.go:89] found id: ""
	I0717 19:34:49.422059  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.422073  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:49.422082  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:49.422147  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:49.465793  459741 cri.go:89] found id: ""
	I0717 19:34:49.465837  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.465850  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:49.465859  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:49.465929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:49.503462  459741 cri.go:89] found id: ""
	I0717 19:34:49.503507  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.503519  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:49.503528  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:49.503598  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:49.546776  459741 cri.go:89] found id: ""
	I0717 19:34:49.546808  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.546818  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:49.546826  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:49.546895  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:49.589367  459741 cri.go:89] found id: ""
	I0717 19:34:49.589401  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.589412  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:49.589420  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:49.589493  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:49.625497  459741 cri.go:89] found id: ""
	I0717 19:34:49.625532  459741 logs.go:276] 0 containers: []
	W0717 19:34:49.625543  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:49.625557  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:49.625574  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:49.664499  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:49.664536  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:49.718160  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:49.718202  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:49.732774  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:49.732807  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:49.806951  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:49.806981  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:49.806999  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:48.192967  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:50.193695  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:51.675656  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:54.175342  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:56.176351  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:52.895953  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:55.394057  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:52.379790  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:52.393469  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:52.393554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:52.434277  459741 cri.go:89] found id: ""
	I0717 19:34:52.434312  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.434322  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:52.434330  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:52.434388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:52.470378  459741 cri.go:89] found id: ""
	I0717 19:34:52.470413  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.470421  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:52.470428  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:52.470501  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:52.506331  459741 cri.go:89] found id: ""
	I0717 19:34:52.506361  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.506369  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:52.506376  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:52.506431  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:52.547497  459741 cri.go:89] found id: ""
	I0717 19:34:52.547532  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.547540  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:52.547545  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:52.547615  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:52.584389  459741 cri.go:89] found id: ""
	I0717 19:34:52.584423  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.584434  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:52.584442  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:52.584527  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:52.621381  459741 cri.go:89] found id: ""
	I0717 19:34:52.621408  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.621416  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:52.621422  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:52.621472  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:52.661706  459741 cri.go:89] found id: ""
	I0717 19:34:52.661744  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.661756  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:52.661764  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:52.661832  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:52.702736  459741 cri.go:89] found id: ""
	I0717 19:34:52.702763  459741 logs.go:276] 0 containers: []
	W0717 19:34:52.702773  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:52.702784  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:52.702799  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:52.741742  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:52.741779  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:52.794377  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:52.794429  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:52.809685  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:52.809717  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:52.884263  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:52.884289  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:52.884305  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:55.472342  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:55.486612  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:55.486677  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:55.519486  459741 cri.go:89] found id: ""
	I0717 19:34:55.519514  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.519522  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:55.519528  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:55.519638  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:55.555162  459741 cri.go:89] found id: ""
	I0717 19:34:55.555190  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.555198  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:55.555204  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:55.555259  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:55.591239  459741 cri.go:89] found id: ""
	I0717 19:34:55.591276  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.591288  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:55.591297  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:55.591359  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:55.628203  459741 cri.go:89] found id: ""
	I0717 19:34:55.628239  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.628251  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:55.628258  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:55.628347  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:55.664663  459741 cri.go:89] found id: ""
	I0717 19:34:55.664702  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.664715  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:55.664725  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:55.664822  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:55.702741  459741 cri.go:89] found id: ""
	I0717 19:34:55.702773  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.702780  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:55.702788  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:55.702862  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:55.745601  459741 cri.go:89] found id: ""
	I0717 19:34:55.745642  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.745653  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:55.745661  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:55.745742  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:55.786699  459741 cri.go:89] found id: ""
	I0717 19:34:55.786727  459741 logs.go:276] 0 containers: []
	W0717 19:34:55.786736  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:55.786746  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:55.786764  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:55.831685  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:55.831722  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:55.885346  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:55.885389  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:55.902374  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:55.902407  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:55.974221  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:55.974245  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:55.974259  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:52.693991  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:55.194420  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:58.676747  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:01.176131  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:57.894988  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:00.394486  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:34:58.557685  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:34:58.571821  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:34:58.571887  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:34:58.606713  459741 cri.go:89] found id: ""
	I0717 19:34:58.606742  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.606751  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:34:58.606757  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:34:58.606831  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:34:58.640693  459741 cri.go:89] found id: ""
	I0717 19:34:58.640728  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.640738  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:34:58.640746  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:34:58.640816  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:34:58.675351  459741 cri.go:89] found id: ""
	I0717 19:34:58.675385  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.675396  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:34:58.675403  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:34:58.675470  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:34:58.711792  459741 cri.go:89] found id: ""
	I0717 19:34:58.711825  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.711834  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:34:58.711841  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:34:58.711898  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:34:58.751391  459741 cri.go:89] found id: ""
	I0717 19:34:58.751418  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.751427  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:34:58.751432  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:34:58.751492  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:34:58.789067  459741 cri.go:89] found id: ""
	I0717 19:34:58.789099  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.789109  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:34:58.789116  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:34:58.789193  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:34:58.827415  459741 cri.go:89] found id: ""
	I0717 19:34:58.827453  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.827464  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:34:58.827470  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:34:58.827538  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:34:58.865505  459741 cri.go:89] found id: ""
	I0717 19:34:58.865543  459741 logs.go:276] 0 containers: []
	W0717 19:34:58.865553  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:34:58.865566  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:34:58.865587  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:34:58.921388  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:34:58.921427  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:34:58.935694  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:34:58.935724  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:34:59.012534  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:34:59.012561  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:34:59.012598  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:34:59.095950  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:34:59.096045  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:34:57.694041  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:00.194529  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:02.194641  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:03.176199  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:05.176261  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:02.894558  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:04.899436  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:01.640824  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:01.654969  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:01.655062  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:01.700480  459741 cri.go:89] found id: ""
	I0717 19:35:01.700528  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.700540  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:01.700548  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:01.700621  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:01.739274  459741 cri.go:89] found id: ""
	I0717 19:35:01.739309  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.739319  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:01.739327  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:01.739403  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:01.778555  459741 cri.go:89] found id: ""
	I0717 19:35:01.778591  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.778601  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:01.778609  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:01.778676  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:01.819147  459741 cri.go:89] found id: ""
	I0717 19:35:01.819189  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.819204  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:01.819213  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:01.819290  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:01.857132  459741 cri.go:89] found id: ""
	I0717 19:35:01.857178  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.857190  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:01.857199  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:01.857274  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:01.895551  459741 cri.go:89] found id: ""
	I0717 19:35:01.895583  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.895593  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:01.895602  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:01.895679  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:01.938146  459741 cri.go:89] found id: ""
	I0717 19:35:01.938185  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.938198  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:01.938206  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:01.938284  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:01.974876  459741 cri.go:89] found id: ""
	I0717 19:35:01.974909  459741 logs.go:276] 0 containers: []
	W0717 19:35:01.974919  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:01.974933  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:01.974955  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:02.050651  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:02.050679  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:02.050711  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:02.130149  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:02.130191  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:02.170930  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:02.170961  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:02.226842  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:02.226889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:04.742978  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:04.757649  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:04.757714  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:04.795487  459741 cri.go:89] found id: ""
	I0717 19:35:04.795517  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.795525  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:04.795531  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:04.795583  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:04.832554  459741 cri.go:89] found id: ""
	I0717 19:35:04.832596  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.832607  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:04.832620  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:04.832678  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:04.867859  459741 cri.go:89] found id: ""
	I0717 19:35:04.867895  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.867904  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:04.867911  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:04.867971  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:04.905936  459741 cri.go:89] found id: ""
	I0717 19:35:04.905969  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.905978  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:04.905985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:04.906064  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:04.943177  459741 cri.go:89] found id: ""
	I0717 19:35:04.943204  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.943213  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:04.943219  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:04.943273  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:04.980038  459741 cri.go:89] found id: ""
	I0717 19:35:04.980073  459741 logs.go:276] 0 containers: []
	W0717 19:35:04.980087  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:04.980093  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:04.980154  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:05.020848  459741 cri.go:89] found id: ""
	I0717 19:35:05.020885  459741 logs.go:276] 0 containers: []
	W0717 19:35:05.020896  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:05.020907  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:05.020985  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:05.060505  459741 cri.go:89] found id: ""
	I0717 19:35:05.060543  459741 logs.go:276] 0 containers: []
	W0717 19:35:05.060556  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:05.060592  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:05.060617  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:05.113354  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:05.113400  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:05.128045  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:05.128086  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:05.213923  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:05.214020  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:05.214045  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:05.296526  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:05.296577  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:04.194995  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:06.694576  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.678930  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:10.175252  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.394677  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:09.394932  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:11.395166  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:07.835865  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:07.851503  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:07.851581  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:07.899945  459741 cri.go:89] found id: ""
	I0717 19:35:07.899976  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.899984  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:07.899992  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:07.900066  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:07.938294  459741 cri.go:89] found id: ""
	I0717 19:35:07.938326  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.938335  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:07.938342  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:07.938402  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:07.975274  459741 cri.go:89] found id: ""
	I0717 19:35:07.975309  459741 logs.go:276] 0 containers: []
	W0717 19:35:07.975319  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:07.975327  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:07.975401  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:08.010818  459741 cri.go:89] found id: ""
	I0717 19:35:08.010864  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.010873  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:08.010880  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:08.010945  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:08.054494  459741 cri.go:89] found id: ""
	I0717 19:35:08.054532  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.054544  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:08.054552  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:08.054651  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:08.096357  459741 cri.go:89] found id: ""
	I0717 19:35:08.096384  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.096393  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:08.096399  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:08.096461  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:08.134694  459741 cri.go:89] found id: ""
	I0717 19:35:08.134739  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.134749  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:08.134755  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:08.134833  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:08.171722  459741 cri.go:89] found id: ""
	I0717 19:35:08.171757  459741 logs.go:276] 0 containers: []
	W0717 19:35:08.171768  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:08.171780  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:08.171797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:08.252441  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:08.252502  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:08.298782  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:08.298815  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:08.352934  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:08.352974  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:08.367121  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:08.367158  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:08.445860  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:10.946537  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:10.959955  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:10.960025  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:10.994611  459741 cri.go:89] found id: ""
	I0717 19:35:10.994646  459741 logs.go:276] 0 containers: []
	W0717 19:35:10.994658  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:10.994667  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:10.994733  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:11.031997  459741 cri.go:89] found id: ""
	I0717 19:35:11.032027  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.032035  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:11.032041  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:11.032115  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:11.073818  459741 cri.go:89] found id: ""
	I0717 19:35:11.073854  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.073865  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:11.073874  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:11.073942  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:11.109966  459741 cri.go:89] found id: ""
	I0717 19:35:11.110000  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.110012  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:11.110025  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:11.110100  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:11.146928  459741 cri.go:89] found id: ""
	I0717 19:35:11.146958  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.146980  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:11.146988  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:11.147056  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:11.189327  459741 cri.go:89] found id: ""
	I0717 19:35:11.189364  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.189374  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:11.189383  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:11.189457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:11.228587  459741 cri.go:89] found id: ""
	I0717 19:35:11.228628  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.228641  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:11.228650  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:11.228719  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:11.267624  459741 cri.go:89] found id: ""
	I0717 19:35:11.267671  459741 logs.go:276] 0 containers: []
	W0717 19:35:11.267685  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:11.267699  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:11.267716  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:11.322589  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:11.322631  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:11.338101  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:11.338147  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:11.411360  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:11.411387  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:11.411405  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:11.495657  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:11.495701  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:09.194430  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:11.693290  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:12.175345  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:14.175825  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:16.177247  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:13.894711  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:15.894771  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:14.037797  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:14.050939  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:14.051012  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:14.093711  459741 cri.go:89] found id: ""
	I0717 19:35:14.093744  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.093756  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:14.093764  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:14.093837  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:14.132139  459741 cri.go:89] found id: ""
	I0717 19:35:14.132168  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.132180  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:14.132188  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:14.132256  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:14.170950  459741 cri.go:89] found id: ""
	I0717 19:35:14.170978  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.170988  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:14.170995  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:14.171073  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:14.211104  459741 cri.go:89] found id: ""
	I0717 19:35:14.211138  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.211148  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:14.211155  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:14.211229  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:14.245921  459741 cri.go:89] found id: ""
	I0717 19:35:14.245961  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.245975  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:14.245985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:14.246053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:14.309477  459741 cri.go:89] found id: ""
	I0717 19:35:14.309509  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.309520  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:14.309529  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:14.309617  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:14.346835  459741 cri.go:89] found id: ""
	I0717 19:35:14.346863  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.346872  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:14.346878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:14.346935  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:14.381258  459741 cri.go:89] found id: ""
	I0717 19:35:14.381289  459741 logs.go:276] 0 containers: []
	W0717 19:35:14.381298  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:14.381307  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:14.381324  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:14.436214  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:14.436262  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:14.452446  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:14.452478  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:14.520238  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:14.520265  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:14.520282  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:14.600444  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:14.600502  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:13.694391  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:16.194147  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:18.676158  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.676984  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:18.394226  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.395263  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:17.144586  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:17.157992  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:17.158084  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:17.195200  459741 cri.go:89] found id: ""
	I0717 19:35:17.195228  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.195238  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:17.195245  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:17.195308  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:17.231846  459741 cri.go:89] found id: ""
	I0717 19:35:17.231892  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.231904  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:17.231913  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:17.231974  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:17.268234  459741 cri.go:89] found id: ""
	I0717 19:35:17.268261  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.268269  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:17.268275  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:17.268328  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:17.308536  459741 cri.go:89] found id: ""
	I0717 19:35:17.308565  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.308574  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:17.308581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:17.308655  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:17.344285  459741 cri.go:89] found id: ""
	I0717 19:35:17.344316  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.344325  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:17.344331  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:17.344393  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:17.384384  459741 cri.go:89] found id: ""
	I0717 19:35:17.384416  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.384425  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:17.384431  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:17.384518  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:17.422255  459741 cri.go:89] found id: ""
	I0717 19:35:17.422282  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.422291  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:17.422297  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:17.422349  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:17.459561  459741 cri.go:89] found id: ""
	I0717 19:35:17.459590  459741 logs.go:276] 0 containers: []
	W0717 19:35:17.459599  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:17.459611  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:17.459628  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:17.473472  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:17.473510  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:17.544929  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:17.544962  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:17.544979  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:17.627230  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:17.627275  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:17.680586  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:17.680622  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:20.234582  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:20.248215  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:20.248282  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:20.286124  459741 cri.go:89] found id: ""
	I0717 19:35:20.286159  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.286171  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:20.286180  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:20.286251  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:20.323885  459741 cri.go:89] found id: ""
	I0717 19:35:20.323925  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.323938  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:20.323945  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:20.324013  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:20.363968  459741 cri.go:89] found id: ""
	I0717 19:35:20.364011  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.364025  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:20.364034  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:20.364108  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:20.404100  459741 cri.go:89] found id: ""
	I0717 19:35:20.404127  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.404136  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:20.404142  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:20.404212  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:20.442339  459741 cri.go:89] found id: ""
	I0717 19:35:20.442372  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.442383  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:20.442391  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:20.442462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:20.480461  459741 cri.go:89] found id: ""
	I0717 19:35:20.480505  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.480517  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:20.480526  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:20.480618  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:20.516072  459741 cri.go:89] found id: ""
	I0717 19:35:20.516104  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.516114  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:20.516119  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:20.516171  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:20.552294  459741 cri.go:89] found id: ""
	I0717 19:35:20.552333  459741 logs.go:276] 0 containers: []
	W0717 19:35:20.552345  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:20.552359  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:20.552377  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:20.607025  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:20.607067  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:20.624323  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:20.624363  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:20.716528  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:20.716550  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:20.716567  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:20.797015  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:20.797059  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:18.693667  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:20.694367  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:23.175240  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:25.175374  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:22.893704  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:24.893940  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:23.345063  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:23.358664  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:23.358781  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:23.395399  459741 cri.go:89] found id: ""
	I0717 19:35:23.395429  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.395436  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:23.395441  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:23.395498  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:23.434827  459741 cri.go:89] found id: ""
	I0717 19:35:23.434866  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.434880  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:23.434889  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:23.434960  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:23.470884  459741 cri.go:89] found id: ""
	I0717 19:35:23.470915  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.470931  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:23.470937  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:23.470989  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:23.508532  459741 cri.go:89] found id: ""
	I0717 19:35:23.508566  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.508575  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:23.508581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:23.508636  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:23.543803  459741 cri.go:89] found id: ""
	I0717 19:35:23.543840  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.543856  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:23.543865  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:23.543938  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:23.578897  459741 cri.go:89] found id: ""
	I0717 19:35:23.578942  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.578953  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:23.578962  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:23.579028  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:23.617967  459741 cri.go:89] found id: ""
	I0717 19:35:23.618003  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.618013  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:23.618021  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:23.618092  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:23.660780  459741 cri.go:89] found id: ""
	I0717 19:35:23.660818  459741 logs.go:276] 0 containers: []
	W0717 19:35:23.660830  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:23.660845  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:23.660862  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:23.745248  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:23.745305  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:23.784355  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:23.784392  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:23.838152  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:23.838199  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:23.853017  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:23.853046  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:23.932674  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:26.433476  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:26.457953  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:26.458030  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:23.192304  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:25.193087  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:27.176102  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:29.677887  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:26.895714  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:29.398017  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:26.515559  459741 cri.go:89] found id: ""
	I0717 19:35:26.515589  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.515598  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:26.515605  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:26.515668  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:26.555092  459741 cri.go:89] found id: ""
	I0717 19:35:26.555123  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.555134  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:26.555142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:26.555208  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:26.591291  459741 cri.go:89] found id: ""
	I0717 19:35:26.591335  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.591348  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:26.591357  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:26.591429  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:26.628941  459741 cri.go:89] found id: ""
	I0717 19:35:26.628970  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.628978  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:26.628985  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:26.629050  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:26.668355  459741 cri.go:89] found id: ""
	I0717 19:35:26.668386  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.668394  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:26.668399  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:26.668457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:26.711810  459741 cri.go:89] found id: ""
	I0717 19:35:26.711846  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.711857  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:26.711865  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:26.711937  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:26.751674  459741 cri.go:89] found id: ""
	I0717 19:35:26.751708  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.751719  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:26.751726  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:26.751781  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:26.792690  459741 cri.go:89] found id: ""
	I0717 19:35:26.792784  459741 logs.go:276] 0 containers: []
	W0717 19:35:26.792803  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:26.792816  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:26.792847  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:26.846466  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:26.846503  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:26.861467  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:26.861500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:26.934219  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:26.934244  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:26.934260  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:27.017150  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:27.017197  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:29.569360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:29.584040  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:29.584112  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:29.619704  459741 cri.go:89] found id: ""
	I0717 19:35:29.619738  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.619750  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:29.619756  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:29.619824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:29.655983  459741 cri.go:89] found id: ""
	I0717 19:35:29.656018  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.656030  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:29.656037  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:29.656103  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:29.694056  459741 cri.go:89] found id: ""
	I0717 19:35:29.694088  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.694098  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:29.694107  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:29.694165  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:29.731955  459741 cri.go:89] found id: ""
	I0717 19:35:29.732047  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.732066  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:29.732075  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:29.732142  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:29.765921  459741 cri.go:89] found id: ""
	I0717 19:35:29.765952  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.765961  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:29.765967  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:29.766022  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:29.798699  459741 cri.go:89] found id: ""
	I0717 19:35:29.798728  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.798736  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:29.798742  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:29.798804  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:29.832551  459741 cri.go:89] found id: ""
	I0717 19:35:29.832580  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.832587  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:29.832593  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:29.832652  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:29.867985  459741 cri.go:89] found id: ""
	I0717 19:35:29.868022  459741 logs.go:276] 0 containers: []
	W0717 19:35:29.868033  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:29.868046  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:29.868071  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:29.941724  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:29.941746  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:29.941760  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:30.025462  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:30.025506  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:30.066732  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:30.066768  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:30.117389  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:30.117434  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:27.694070  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:30.193593  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.194062  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.175354  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:34.675049  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:31.894626  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:33.897661  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.394620  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:32.632779  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:32.648751  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:32.648828  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:32.686145  459741 cri.go:89] found id: ""
	I0717 19:35:32.686174  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.686182  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:32.686190  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:32.686242  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:32.721924  459741 cri.go:89] found id: ""
	I0717 19:35:32.721956  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.721967  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:32.721974  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:32.722042  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:32.760815  459741 cri.go:89] found id: ""
	I0717 19:35:32.760851  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.760862  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:32.760869  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:32.760939  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:32.797740  459741 cri.go:89] found id: ""
	I0717 19:35:32.797779  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.797792  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:32.797801  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:32.797878  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:32.833914  459741 cri.go:89] found id: ""
	I0717 19:35:32.833947  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.833955  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:32.833962  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:32.834020  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:32.870265  459741 cri.go:89] found id: ""
	I0717 19:35:32.870297  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.870306  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:32.870319  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:32.870388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:32.911340  459741 cri.go:89] found id: ""
	I0717 19:35:32.911380  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.911391  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:32.911402  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:32.911470  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:32.947932  459741 cri.go:89] found id: ""
	I0717 19:35:32.947967  459741 logs.go:276] 0 containers: []
	W0717 19:35:32.947978  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:32.947990  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:32.948008  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:33.016473  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:33.016513  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:33.016527  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:33.096741  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:33.096783  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:33.137686  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:33.137723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:33.194110  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:33.194157  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:35.710074  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:35.723799  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:35.723880  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:35.759473  459741 cri.go:89] found id: ""
	I0717 19:35:35.759515  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.759526  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:35.759535  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:35.759606  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:35.796764  459741 cri.go:89] found id: ""
	I0717 19:35:35.796799  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.796809  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:35.796817  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:35.796892  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:35.831345  459741 cri.go:89] found id: ""
	I0717 19:35:35.831375  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.831386  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:35.831394  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:35.831463  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:35.869885  459741 cri.go:89] found id: ""
	I0717 19:35:35.869920  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.869931  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:35.869939  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:35.870009  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:35.908812  459741 cri.go:89] found id: ""
	I0717 19:35:35.908840  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.908849  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:35.908855  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:35.908909  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:35.946227  459741 cri.go:89] found id: ""
	I0717 19:35:35.946285  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.946297  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:35.946305  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:35.946387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:35.983534  459741 cri.go:89] found id: ""
	I0717 19:35:35.983577  459741 logs.go:276] 0 containers: []
	W0717 19:35:35.983592  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:35.983601  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:35.983670  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:36.019516  459741 cri.go:89] found id: ""
	I0717 19:35:36.019552  459741 logs.go:276] 0 containers: []
	W0717 19:35:36.019564  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:36.019578  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:36.019597  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:36.070887  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:36.070931  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:36.087054  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:36.087092  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:36.163759  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:36.163795  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:36.163809  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:36.249968  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:36.250012  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:34.693272  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.693505  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:36.675472  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.677852  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:40.679662  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.895397  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:41.394394  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:38.799616  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:38.813094  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:38.813161  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:38.848696  459741 cri.go:89] found id: ""
	I0717 19:35:38.848731  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.848745  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:38.848754  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:38.848836  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:38.885898  459741 cri.go:89] found id: ""
	I0717 19:35:38.885932  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.885943  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:38.885950  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:38.886016  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:38.925499  459741 cri.go:89] found id: ""
	I0717 19:35:38.925531  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.925543  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:38.925550  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:38.925615  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:38.961176  459741 cri.go:89] found id: ""
	I0717 19:35:38.961209  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.961218  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:38.961225  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:38.961279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:38.998940  459741 cri.go:89] found id: ""
	I0717 19:35:38.998971  459741 logs.go:276] 0 containers: []
	W0717 19:35:38.998980  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:38.998986  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:38.999040  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:39.034934  459741 cri.go:89] found id: ""
	I0717 19:35:39.034966  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.034973  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:39.034980  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:39.035034  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:39.070278  459741 cri.go:89] found id: ""
	I0717 19:35:39.070309  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.070319  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:39.070327  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:39.070413  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:39.106302  459741 cri.go:89] found id: ""
	I0717 19:35:39.106337  459741 logs.go:276] 0 containers: []
	W0717 19:35:39.106348  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:39.106361  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:39.106379  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:39.145656  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:39.145685  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:39.198998  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:39.199042  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:39.215383  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:39.215416  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:39.284244  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:39.284270  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:39.284286  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:38.693865  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:40.694855  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:43.176915  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.676854  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:43.394736  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.395188  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:41.864335  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:41.878557  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:41.878645  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:41.919806  459741 cri.go:89] found id: ""
	I0717 19:35:41.919843  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.919856  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:41.919865  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:41.919938  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:41.956113  459741 cri.go:89] found id: ""
	I0717 19:35:41.956144  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.956154  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:41.956161  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:41.956230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:41.996211  459741 cri.go:89] found id: ""
	I0717 19:35:41.996256  459741 logs.go:276] 0 containers: []
	W0717 19:35:41.996266  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:41.996274  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:41.996341  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:42.030800  459741 cri.go:89] found id: ""
	I0717 19:35:42.030829  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.030840  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:42.030847  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:42.030922  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:42.065307  459741 cri.go:89] found id: ""
	I0717 19:35:42.065347  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.065358  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:42.065368  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:42.065440  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:42.103574  459741 cri.go:89] found id: ""
	I0717 19:35:42.103609  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.103621  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:42.103628  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:42.103693  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:42.141146  459741 cri.go:89] found id: ""
	I0717 19:35:42.141181  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.141320  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:42.141337  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:42.141418  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:42.179958  459741 cri.go:89] found id: ""
	I0717 19:35:42.179986  459741 logs.go:276] 0 containers: []
	W0717 19:35:42.179994  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:42.180004  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:42.180017  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:42.194911  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:42.194947  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:42.267709  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:42.267750  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:42.267772  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:42.347258  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:42.347302  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:42.393595  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:42.393631  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:44.946043  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:44.958994  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:44.959086  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:44.997687  459741 cri.go:89] found id: ""
	I0717 19:35:44.997724  459741 logs.go:276] 0 containers: []
	W0717 19:35:44.997735  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:44.997743  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:44.997814  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:45.038023  459741 cri.go:89] found id: ""
	I0717 19:35:45.038060  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.038070  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:45.038079  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:45.038141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:45.073529  459741 cri.go:89] found id: ""
	I0717 19:35:45.073562  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.073573  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:45.073581  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:45.073644  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:45.109831  459741 cri.go:89] found id: ""
	I0717 19:35:45.109863  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.109871  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:45.109878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:45.109933  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:45.147828  459741 cri.go:89] found id: ""
	I0717 19:35:45.147867  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.147891  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:45.147899  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:45.147986  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:45.184729  459741 cri.go:89] found id: ""
	I0717 19:35:45.184765  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.184777  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:45.184784  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:45.184846  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:45.223895  459741 cri.go:89] found id: ""
	I0717 19:35:45.223940  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.223950  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:45.223956  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:45.224016  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:45.263391  459741 cri.go:89] found id: ""
	I0717 19:35:45.263421  459741 logs.go:276] 0 containers: []
	W0717 19:35:45.263430  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:45.263440  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:45.263457  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:45.316323  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:45.316369  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:45.331447  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:45.331491  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:45.413226  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:45.413259  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:45.413277  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:45.498680  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:45.498738  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:43.193210  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:45.693264  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:48.175929  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:50.176109  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:47.893486  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:49.894666  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:48.043162  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:48.057081  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:48.057146  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:48.096607  459741 cri.go:89] found id: ""
	I0717 19:35:48.096636  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.096644  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:48.096650  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:48.096710  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:48.132865  459741 cri.go:89] found id: ""
	I0717 19:35:48.132895  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.132906  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:48.132913  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:48.132979  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:48.168060  459741 cri.go:89] found id: ""
	I0717 19:35:48.168090  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.168102  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:48.168109  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:48.168177  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:48.203993  459741 cri.go:89] found id: ""
	I0717 19:35:48.204023  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.204033  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:48.204041  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:48.204102  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:48.240321  459741 cri.go:89] found id: ""
	I0717 19:35:48.240353  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.240364  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:48.240371  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:48.240440  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:48.281103  459741 cri.go:89] found id: ""
	I0717 19:35:48.281147  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.281158  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:48.281167  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:48.281233  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:48.316002  459741 cri.go:89] found id: ""
	I0717 19:35:48.316034  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.316043  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:48.316049  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:48.316102  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:48.355370  459741 cri.go:89] found id: ""
	I0717 19:35:48.355399  459741 logs.go:276] 0 containers: []
	W0717 19:35:48.355409  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:48.355421  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:48.355456  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:48.372448  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:48.372496  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:48.443867  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:48.443901  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:48.443919  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:48.519762  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:48.519807  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:48.562263  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:48.562297  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:51.112016  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:51.125350  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:51.125421  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:51.162053  459741 cri.go:89] found id: ""
	I0717 19:35:51.162090  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.162101  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:51.162111  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:51.162182  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:51.201853  459741 cri.go:89] found id: ""
	I0717 19:35:51.201924  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.201937  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:51.201944  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:51.202021  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:51.241675  459741 cri.go:89] found id: ""
	I0717 19:35:51.241709  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.241720  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:51.241729  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:51.241798  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:51.279332  459741 cri.go:89] found id: ""
	I0717 19:35:51.279369  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.279380  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:51.279388  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:51.279443  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:51.316375  459741 cri.go:89] found id: ""
	I0717 19:35:51.316413  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.316424  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:51.316432  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:51.316531  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:51.353300  459741 cri.go:89] found id: ""
	I0717 19:35:51.353337  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.353347  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:51.353355  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:51.353424  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:51.390413  459741 cri.go:89] found id: ""
	I0717 19:35:51.390441  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.390449  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:51.390457  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:51.390523  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:51.428040  459741 cri.go:89] found id: ""
	I0717 19:35:51.428077  459741 logs.go:276] 0 containers: []
	W0717 19:35:51.428089  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:51.428103  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:51.428145  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:51.481743  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:51.481792  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:51.498226  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:51.498261  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:35:48.194645  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:50.194741  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:52.676762  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:55.177549  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:51.895688  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:54.394821  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	W0717 19:35:51.579871  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:51.579895  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:51.579909  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:51.659448  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:51.659490  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:54.201712  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:54.215688  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:54.215766  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:54.253448  459741 cri.go:89] found id: ""
	I0717 19:35:54.253479  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.253487  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:54.253493  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:54.253547  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:54.288135  459741 cri.go:89] found id: ""
	I0717 19:35:54.288176  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.288187  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:54.288194  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:54.288292  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:54.324798  459741 cri.go:89] found id: ""
	I0717 19:35:54.324845  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.324855  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:54.324864  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:54.324936  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:54.363909  459741 cri.go:89] found id: ""
	I0717 19:35:54.363943  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.363955  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:54.363964  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:54.364039  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:54.401221  459741 cri.go:89] found id: ""
	I0717 19:35:54.401248  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.401259  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:54.401267  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:54.401335  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:54.439258  459741 cri.go:89] found id: ""
	I0717 19:35:54.439285  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.439293  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:54.439299  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:54.439352  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:54.473321  459741 cri.go:89] found id: ""
	I0717 19:35:54.473358  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.473373  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:54.473379  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:54.473432  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:54.519107  459741 cri.go:89] found id: ""
	I0717 19:35:54.519141  459741 logs.go:276] 0 containers: []
	W0717 19:35:54.519152  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:54.519167  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:54.519184  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:35:54.562666  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:54.562710  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:54.614711  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:54.614756  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:54.630953  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:54.630986  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:54.706639  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:54.706666  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:54.706684  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:52.694467  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:55.193366  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:57.179574  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:59.675883  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:56.895166  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:59.396238  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:35:57.289180  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:35:57.302364  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:35:57.302447  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:35:57.344401  459741 cri.go:89] found id: ""
	I0717 19:35:57.344437  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.344450  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:35:57.344459  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:35:57.344551  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:35:57.384095  459741 cri.go:89] found id: ""
	I0717 19:35:57.384126  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.384135  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:35:57.384142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:35:57.384209  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:35:57.422789  459741 cri.go:89] found id: ""
	I0717 19:35:57.422825  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.422836  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:35:57.422844  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:35:57.422914  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:35:57.460943  459741 cri.go:89] found id: ""
	I0717 19:35:57.460970  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.460979  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:35:57.460984  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:35:57.461035  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:35:57.495168  459741 cri.go:89] found id: ""
	I0717 19:35:57.495197  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.495204  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:35:57.495211  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:35:57.495267  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:35:57.529611  459741 cri.go:89] found id: ""
	I0717 19:35:57.529641  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.529649  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:35:57.529656  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:35:57.529719  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:35:57.565502  459741 cri.go:89] found id: ""
	I0717 19:35:57.565535  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.565544  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:35:57.565549  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:35:57.565610  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:35:57.601058  459741 cri.go:89] found id: ""
	I0717 19:35:57.601093  459741 logs.go:276] 0 containers: []
	W0717 19:35:57.601107  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:35:57.601121  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:35:57.601139  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:35:57.651408  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:35:57.651450  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:35:57.665696  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:35:57.665734  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:35:57.739259  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:57.739301  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:35:57.739335  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:35:57.818085  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:35:57.818128  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:00.358441  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:00.371840  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:00.371904  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:00.411607  459741 cri.go:89] found id: ""
	I0717 19:36:00.411639  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.411647  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:00.411653  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:00.411717  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:00.448879  459741 cri.go:89] found id: ""
	I0717 19:36:00.448917  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.448929  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:00.448938  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:00.449006  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:00.489637  459741 cri.go:89] found id: ""
	I0717 19:36:00.489683  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.489695  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:00.489705  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:00.489773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:00.528172  459741 cri.go:89] found id: ""
	I0717 19:36:00.528206  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.528215  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:00.528221  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:00.528284  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:00.564857  459741 cri.go:89] found id: ""
	I0717 19:36:00.564891  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.564903  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:00.564911  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:00.564979  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:00.601226  459741 cri.go:89] found id: ""
	I0717 19:36:00.601257  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.601269  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:00.601277  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:00.601342  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:00.641481  459741 cri.go:89] found id: ""
	I0717 19:36:00.641515  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.641526  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:00.641533  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:00.641609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:00.678564  459741 cri.go:89] found id: ""
	I0717 19:36:00.678590  459741 logs.go:276] 0 containers: []
	W0717 19:36:00.678598  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:00.678608  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:00.678622  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:00.763613  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:00.763657  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:00.804763  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:00.804797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:00.856648  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:00.856686  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:00.870767  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:00.870797  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:00.949952  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:35:57.694827  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:00.193607  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:02.194404  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:01.676020  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:03.676246  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:05.676400  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:01.894566  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:04.394473  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:06.395396  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:03.450461  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:03.465429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:03.465500  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:03.504346  459741 cri.go:89] found id: ""
	I0717 19:36:03.504377  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.504387  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:03.504393  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:03.504457  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:03.546643  459741 cri.go:89] found id: ""
	I0717 19:36:03.546671  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.546678  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:03.546685  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:03.546741  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:03.587389  459741 cri.go:89] found id: ""
	I0717 19:36:03.587423  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.587435  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:03.587443  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:03.587506  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:03.621968  459741 cri.go:89] found id: ""
	I0717 19:36:03.622002  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.622014  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:03.622023  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:03.622095  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:03.655934  459741 cri.go:89] found id: ""
	I0717 19:36:03.655967  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.655976  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:03.655982  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:03.656051  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:03.690464  459741 cri.go:89] found id: ""
	I0717 19:36:03.690493  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.690503  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:03.690511  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:03.690575  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:03.727030  459741 cri.go:89] found id: ""
	I0717 19:36:03.727068  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.727080  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:03.727088  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:03.727158  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:03.760858  459741 cri.go:89] found id: ""
	I0717 19:36:03.760898  459741 logs.go:276] 0 containers: []
	W0717 19:36:03.760907  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:03.760917  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:03.760931  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:03.774333  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:03.774366  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:03.849228  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:03.849255  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:03.849273  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:03.930165  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:03.930203  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:03.971833  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:03.971875  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:04.693899  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:07.192840  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:07.678006  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:10.176147  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:08.395699  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:10.894333  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:06.525723  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:06.539410  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:06.539502  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:06.580112  459741 cri.go:89] found id: ""
	I0717 19:36:06.580152  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.580173  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:06.580181  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:06.580272  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:06.622098  459741 cri.go:89] found id: ""
	I0717 19:36:06.622128  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.622136  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:06.622142  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:06.622209  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:06.669930  459741 cri.go:89] found id: ""
	I0717 19:36:06.669962  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.669973  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:06.669982  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:06.670048  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:06.717072  459741 cri.go:89] found id: ""
	I0717 19:36:06.717111  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.717124  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:06.717132  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:06.717207  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:06.756637  459741 cri.go:89] found id: ""
	I0717 19:36:06.756672  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.756680  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:06.756694  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:06.756756  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:06.804359  459741 cri.go:89] found id: ""
	I0717 19:36:06.804388  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.804397  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:06.804404  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:06.804468  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:06.856082  459741 cri.go:89] found id: ""
	I0717 19:36:06.856111  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.856120  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:06.856125  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:06.856180  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:06.898141  459741 cri.go:89] found id: ""
	I0717 19:36:06.898170  459741 logs.go:276] 0 containers: []
	W0717 19:36:06.898180  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:06.898191  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:06.898209  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:06.975635  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:06.975660  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:06.975676  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:07.055695  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:07.055741  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:07.096041  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:07.096077  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:07.146523  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:07.146570  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:09.661906  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:09.676994  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:09.677078  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:09.716287  459741 cri.go:89] found id: ""
	I0717 19:36:09.716315  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.716328  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:09.716337  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:09.716405  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:09.759489  459741 cri.go:89] found id: ""
	I0717 19:36:09.759521  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.759532  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:09.759541  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:09.759601  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:09.799604  459741 cri.go:89] found id: ""
	I0717 19:36:09.799634  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.799643  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:09.799649  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:09.799709  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:09.839542  459741 cri.go:89] found id: ""
	I0717 19:36:09.839572  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.839581  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:09.839588  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:09.839666  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:09.879061  459741 cri.go:89] found id: ""
	I0717 19:36:09.879098  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.879110  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:09.879118  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:09.879184  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:09.920903  459741 cri.go:89] found id: ""
	I0717 19:36:09.920931  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.920939  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:09.920946  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:09.921002  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:09.956362  459741 cri.go:89] found id: ""
	I0717 19:36:09.956391  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.956411  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:09.956429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:09.956508  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:09.992817  459741 cri.go:89] found id: ""
	I0717 19:36:09.992849  459741 logs.go:276] 0 containers: []
	W0717 19:36:09.992859  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:09.992872  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:09.992889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:10.060594  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:10.060620  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:10.060660  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:10.141840  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:10.141895  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:10.182850  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:10.182889  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:10.238946  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:10.238993  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:09.194101  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:11.693468  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.675987  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:15.176665  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.894710  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:15.394738  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:12.753796  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:12.766740  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:12.766816  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:12.799307  459741 cri.go:89] found id: ""
	I0717 19:36:12.799341  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.799351  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:12.799362  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:12.799439  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:12.838345  459741 cri.go:89] found id: ""
	I0717 19:36:12.838395  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.838408  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:12.838416  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:12.838482  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:12.876780  459741 cri.go:89] found id: ""
	I0717 19:36:12.876807  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.876816  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:12.876822  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:12.876907  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:12.913222  459741 cri.go:89] found id: ""
	I0717 19:36:12.913253  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.913263  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:12.913271  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:12.913334  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:12.948210  459741 cri.go:89] found id: ""
	I0717 19:36:12.948245  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.948255  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:12.948263  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:12.948328  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:12.980746  459741 cri.go:89] found id: ""
	I0717 19:36:12.980782  459741 logs.go:276] 0 containers: []
	W0717 19:36:12.980794  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:12.980806  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:12.980871  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:13.015655  459741 cri.go:89] found id: ""
	I0717 19:36:13.015694  459741 logs.go:276] 0 containers: []
	W0717 19:36:13.015707  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:13.015715  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:13.015773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:13.050570  459741 cri.go:89] found id: ""
	I0717 19:36:13.050609  459741 logs.go:276] 0 containers: []
	W0717 19:36:13.050617  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:13.050627  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:13.050642  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:13.101031  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:13.101072  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:13.115206  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:13.115239  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:13.190949  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:13.190979  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:13.190994  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:13.267467  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:13.267508  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:15.808237  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:15.822498  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:15.822570  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:15.860509  459741 cri.go:89] found id: ""
	I0717 19:36:15.860545  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.860556  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:15.860564  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:15.860630  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:15.895608  459741 cri.go:89] found id: ""
	I0717 19:36:15.895655  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.895666  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:15.895674  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:15.895738  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:15.936113  459741 cri.go:89] found id: ""
	I0717 19:36:15.936148  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.936159  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:15.936168  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:15.936254  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:15.973146  459741 cri.go:89] found id: ""
	I0717 19:36:15.973186  459741 logs.go:276] 0 containers: []
	W0717 19:36:15.973198  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:15.973207  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:15.973273  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:16.006122  459741 cri.go:89] found id: ""
	I0717 19:36:16.006164  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.006175  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:16.006183  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:16.006255  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:16.044352  459741 cri.go:89] found id: ""
	I0717 19:36:16.044385  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.044397  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:16.044406  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:16.044476  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:16.081573  459741 cri.go:89] found id: ""
	I0717 19:36:16.081614  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.081625  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:16.081637  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:16.081707  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:16.120444  459741 cri.go:89] found id: ""
	I0717 19:36:16.120480  459741 logs.go:276] 0 containers: []
	W0717 19:36:16.120506  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:16.120520  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:16.120536  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:16.171563  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:16.171601  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:16.185534  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:16.185564  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:16.258627  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:16.258657  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:16.258672  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:16.341345  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:16.341390  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:14.193370  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:16.693933  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:17.680240  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:19.681457  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:17.894353  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:19.894879  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:18.883092  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:18.897931  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:18.898015  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:18.932054  459741 cri.go:89] found id: ""
	I0717 19:36:18.932085  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.932096  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:18.932104  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:18.932162  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:18.966450  459741 cri.go:89] found id: ""
	I0717 19:36:18.966478  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.966490  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:18.966498  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:18.966561  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:18.999881  459741 cri.go:89] found id: ""
	I0717 19:36:18.999909  459741 logs.go:276] 0 containers: []
	W0717 19:36:18.999920  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:18.999927  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:18.999984  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:19.036701  459741 cri.go:89] found id: ""
	I0717 19:36:19.036730  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.036746  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:19.036753  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:19.036824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:19.073488  459741 cri.go:89] found id: ""
	I0717 19:36:19.073515  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.073523  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:19.073528  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:19.073582  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:19.109128  459741 cri.go:89] found id: ""
	I0717 19:36:19.109161  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.109171  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:19.109179  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:19.109249  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:19.148452  459741 cri.go:89] found id: ""
	I0717 19:36:19.148494  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.148509  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:19.148518  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:19.148595  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:19.184056  459741 cri.go:89] found id: ""
	I0717 19:36:19.184086  459741 logs.go:276] 0 containers: []
	W0717 19:36:19.184097  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:19.184112  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:19.184129  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:19.198518  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:19.198553  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:19.273176  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:19.273198  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:19.273213  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:19.347999  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:19.348042  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:19.390847  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:19.390890  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:19.194436  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:21.693020  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:22.176414  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:24.676290  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:22.395588  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:24.894771  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:21.946700  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:21.960590  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:21.960655  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:21.994632  459741 cri.go:89] found id: ""
	I0717 19:36:21.994662  459741 logs.go:276] 0 containers: []
	W0717 19:36:21.994670  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:21.994677  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:21.994738  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:22.029390  459741 cri.go:89] found id: ""
	I0717 19:36:22.029419  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.029428  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:22.029434  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:22.029484  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:22.065632  459741 cri.go:89] found id: ""
	I0717 19:36:22.065668  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.065679  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:22.065687  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:22.065792  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:22.100893  459741 cri.go:89] found id: ""
	I0717 19:36:22.100931  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.100942  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:22.100950  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:22.101007  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:22.137064  459741 cri.go:89] found id: ""
	I0717 19:36:22.137099  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.137110  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:22.137118  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:22.137187  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:22.176027  459741 cri.go:89] found id: ""
	I0717 19:36:22.176061  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.176071  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:22.176080  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:22.176147  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:22.211035  459741 cri.go:89] found id: ""
	I0717 19:36:22.211060  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.211068  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:22.211076  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:22.211129  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:22.246541  459741 cri.go:89] found id: ""
	I0717 19:36:22.246577  459741 logs.go:276] 0 containers: []
	W0717 19:36:22.246589  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:22.246617  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:22.246635  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:22.288154  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:22.288198  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:22.342243  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:22.342295  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:22.356125  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:22.356157  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:22.427767  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:22.427793  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:22.427806  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:25.011986  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:25.026057  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:25.026134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:25.060744  459741 cri.go:89] found id: ""
	I0717 19:36:25.060778  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.060788  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:25.060794  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:25.060857  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:25.094760  459741 cri.go:89] found id: ""
	I0717 19:36:25.094799  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.094810  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:25.094818  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:25.094884  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:25.129937  459741 cri.go:89] found id: ""
	I0717 19:36:25.129980  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.129990  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:25.129996  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:25.130053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:25.162886  459741 cri.go:89] found id: ""
	I0717 19:36:25.162914  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.162922  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:25.162927  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:25.162994  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:25.199261  459741 cri.go:89] found id: ""
	I0717 19:36:25.199290  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.199312  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:25.199329  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:25.199388  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:25.236454  459741 cri.go:89] found id: ""
	I0717 19:36:25.236494  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.236506  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:25.236514  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:25.236569  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:25.272257  459741 cri.go:89] found id: ""
	I0717 19:36:25.272293  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.272304  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:25.272312  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:25.272381  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:25.308442  459741 cri.go:89] found id: ""
	I0717 19:36:25.308478  459741 logs.go:276] 0 containers: []
	W0717 19:36:25.308504  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:25.308517  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:25.308534  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:25.362269  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:25.362321  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:25.376994  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:25.377026  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:25.450219  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:25.450242  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:25.450256  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:25.537123  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:25.537161  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:23.693457  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.192763  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.677228  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:29.175390  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.176353  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:26.895481  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:29.393635  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.395374  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:28.077415  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:28.093047  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:28.093126  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:28.128129  459741 cri.go:89] found id: ""
	I0717 19:36:28.128158  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.128166  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:28.128180  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:28.128234  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:28.170796  459741 cri.go:89] found id: ""
	I0717 19:36:28.170834  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.170845  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:28.170853  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:28.170924  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:28.208250  459741 cri.go:89] found id: ""
	I0717 19:36:28.208278  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.208287  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:28.208304  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:28.208385  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:28.251511  459741 cri.go:89] found id: ""
	I0717 19:36:28.251547  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.251567  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:28.251575  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:28.251648  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:28.286597  459741 cri.go:89] found id: ""
	I0717 19:36:28.286633  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.286643  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:28.286651  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:28.286715  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:28.323089  459741 cri.go:89] found id: ""
	I0717 19:36:28.323119  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.323127  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:28.323133  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:28.323192  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:28.357941  459741 cri.go:89] found id: ""
	I0717 19:36:28.357972  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.357980  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:28.357987  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:28.358053  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:28.393141  459741 cri.go:89] found id: ""
	I0717 19:36:28.393171  459741 logs.go:276] 0 containers: []
	W0717 19:36:28.393182  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:28.393192  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:28.393208  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:28.446992  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:28.447031  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:28.460386  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:28.460416  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:28.524640  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:28.524671  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:28.524694  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:28.605322  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:28.605363  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:31.145909  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:31.159567  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:31.159686  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:31.196086  459741 cri.go:89] found id: ""
	I0717 19:36:31.196113  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.196125  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:31.196134  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:31.196186  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:31.238076  459741 cri.go:89] found id: ""
	I0717 19:36:31.238104  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.238111  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:31.238117  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:31.238172  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:31.274360  459741 cri.go:89] found id: ""
	I0717 19:36:31.274391  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.274400  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:31.274406  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:31.274462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:31.308845  459741 cri.go:89] found id: ""
	I0717 19:36:31.308871  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.308880  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:31.308886  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:31.308946  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:31.344978  459741 cri.go:89] found id: ""
	I0717 19:36:31.345010  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.345021  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:31.345028  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:31.345094  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:31.381741  459741 cri.go:89] found id: ""
	I0717 19:36:31.381767  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.381775  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:31.381783  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:31.381837  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:31.417522  459741 cri.go:89] found id: ""
	I0717 19:36:31.417554  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.417563  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:31.417571  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:31.417635  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:31.451121  459741 cri.go:89] found id: ""
	I0717 19:36:31.451152  459741 logs.go:276] 0 containers: []
	W0717 19:36:31.451165  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:31.451177  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:31.451195  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:28.195048  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:30.693260  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:33.676171  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:35.676215  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:33.894329  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:36.394573  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:31.542015  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:31.542063  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:31.583418  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:31.583449  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:31.635807  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:31.635845  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:31.649144  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:31.649172  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:31.728539  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:34.229124  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:34.242482  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:34.242554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:34.276554  459741 cri.go:89] found id: ""
	I0717 19:36:34.276602  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.276610  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:34.276616  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:34.276671  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:34.314766  459741 cri.go:89] found id: ""
	I0717 19:36:34.314799  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.314807  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:34.314813  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:34.314874  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:34.352765  459741 cri.go:89] found id: ""
	I0717 19:36:34.352798  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.352809  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:34.352817  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:34.352886  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:34.386519  459741 cri.go:89] found id: ""
	I0717 19:36:34.386556  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.386564  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:34.386570  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:34.386669  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:34.423789  459741 cri.go:89] found id: ""
	I0717 19:36:34.423820  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.423829  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:34.423838  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:34.423911  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:34.458849  459741 cri.go:89] found id: ""
	I0717 19:36:34.458883  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.458895  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:34.458903  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:34.458969  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:34.494653  459741 cri.go:89] found id: ""
	I0717 19:36:34.494686  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.494697  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:34.494705  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:34.494770  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:34.529386  459741 cri.go:89] found id: ""
	I0717 19:36:34.529423  459741 logs.go:276] 0 containers: []
	W0717 19:36:34.529431  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:34.529441  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:34.529455  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:34.582161  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:34.582204  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:34.596699  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:34.596732  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:34.673468  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:34.673501  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:34.673519  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:34.751134  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:34.751180  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:33.193313  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:35.193610  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:38.178018  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.676860  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:38.395038  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.396311  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:37.290429  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:37.304307  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:37.304391  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:37.338790  459741 cri.go:89] found id: ""
	I0717 19:36:37.338818  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.338827  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:37.338833  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:37.338903  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:37.376923  459741 cri.go:89] found id: ""
	I0717 19:36:37.376953  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.376961  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:37.376966  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:37.377017  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:37.415988  459741 cri.go:89] found id: ""
	I0717 19:36:37.416016  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.416024  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:37.416029  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:37.416083  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:37.449398  459741 cri.go:89] found id: ""
	I0717 19:36:37.449435  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.449447  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:37.449459  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:37.449532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:37.489489  459741 cri.go:89] found id: ""
	I0717 19:36:37.489525  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.489535  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:37.489544  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:37.489609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:37.528055  459741 cri.go:89] found id: ""
	I0717 19:36:37.528092  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.528103  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:37.528112  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:37.528174  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:37.564295  459741 cri.go:89] found id: ""
	I0717 19:36:37.564332  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.564344  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:37.564352  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:37.564421  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:37.597909  459741 cri.go:89] found id: ""
	I0717 19:36:37.597949  459741 logs.go:276] 0 containers: []
	W0717 19:36:37.597960  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:37.597976  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:37.598002  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:37.652104  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:37.652147  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:37.668341  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:37.668374  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:37.746663  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:37.746693  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:37.746706  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:37.822210  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:37.822250  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:40.370417  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:40.385795  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:40.385873  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:40.431821  459741 cri.go:89] found id: ""
	I0717 19:36:40.431861  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.431873  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:40.431881  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:40.431952  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:40.468302  459741 cri.go:89] found id: ""
	I0717 19:36:40.468334  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.468346  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:40.468354  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:40.468409  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:40.503678  459741 cri.go:89] found id: ""
	I0717 19:36:40.503709  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.503727  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:40.503733  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:40.503785  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:40.540732  459741 cri.go:89] found id: ""
	I0717 19:36:40.540763  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.540772  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:40.540778  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:40.540843  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:40.589546  459741 cri.go:89] found id: ""
	I0717 19:36:40.589574  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.589583  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:40.589590  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:40.589642  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:40.625314  459741 cri.go:89] found id: ""
	I0717 19:36:40.625350  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.625359  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:40.625368  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:40.625435  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:40.663946  459741 cri.go:89] found id: ""
	I0717 19:36:40.663974  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.663982  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:40.663990  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:40.664048  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:40.701681  459741 cri.go:89] found id: ""
	I0717 19:36:40.701712  459741 logs.go:276] 0 containers: []
	W0717 19:36:40.701722  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:40.701732  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:40.701747  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:40.762876  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:40.762913  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:40.777993  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:40.778039  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:40.854973  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:40.854996  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:40.855015  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:40.935075  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:40.935114  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:37.693613  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:40.192783  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:42.193024  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:43.176326  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:45.675745  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:42.895180  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:45.396439  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:43.476048  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:43.490580  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:43.490652  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:43.525613  459741 cri.go:89] found id: ""
	I0717 19:36:43.525649  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.525658  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:43.525665  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:43.525722  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:43.564102  459741 cri.go:89] found id: ""
	I0717 19:36:43.564147  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.564158  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:43.564166  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:43.564230  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:43.603290  459741 cri.go:89] found id: ""
	I0717 19:36:43.603316  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.603323  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:43.603329  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:43.603387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:43.638001  459741 cri.go:89] found id: ""
	I0717 19:36:43.638031  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.638038  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:43.638056  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:43.638134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:43.672992  459741 cri.go:89] found id: ""
	I0717 19:36:43.673026  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.673037  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:43.673045  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:43.673115  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:43.713130  459741 cri.go:89] found id: ""
	I0717 19:36:43.713165  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.713176  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:43.713188  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:43.713255  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:43.747637  459741 cri.go:89] found id: ""
	I0717 19:36:43.747685  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.747694  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:43.747702  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:43.747771  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:43.784425  459741 cri.go:89] found id: ""
	I0717 19:36:43.784460  459741 logs.go:276] 0 containers: []
	W0717 19:36:43.784471  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:43.784492  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:43.784510  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:43.798454  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:43.798483  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:43.875753  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:43.875776  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:43.875793  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:43.957009  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:43.957052  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:44.001089  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:44.001122  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:44.193299  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:46.193520  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:47.679212  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:50.176924  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:47.894374  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:49.898348  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:46.554298  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:46.568658  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:46.568730  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:46.604721  459741 cri.go:89] found id: ""
	I0717 19:36:46.604750  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.604759  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:46.604765  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:46.604815  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:46.644164  459741 cri.go:89] found id: ""
	I0717 19:36:46.644196  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.644209  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:46.644217  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:46.644288  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:46.683657  459741 cri.go:89] found id: ""
	I0717 19:36:46.683695  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.683702  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:46.683708  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:46.683773  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:46.720967  459741 cri.go:89] found id: ""
	I0717 19:36:46.720995  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.721003  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:46.721008  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:46.721059  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:46.755825  459741 cri.go:89] found id: ""
	I0717 19:36:46.755854  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.755866  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:46.755876  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:46.755946  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:46.797091  459741 cri.go:89] found id: ""
	I0717 19:36:46.797130  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.797138  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:46.797145  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:46.797201  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:46.838053  459741 cri.go:89] found id: ""
	I0717 19:36:46.838090  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.838100  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:46.838108  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:46.838176  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:46.881516  459741 cri.go:89] found id: ""
	I0717 19:36:46.881549  459741 logs.go:276] 0 containers: []
	W0717 19:36:46.881558  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:46.881567  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:46.881582  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:46.952407  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:46.952434  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:46.952457  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:47.043739  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:47.043787  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:47.083335  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:47.083367  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:47.138212  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:47.138256  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:49.656394  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:49.670755  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:49.670830  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:49.709177  459741 cri.go:89] found id: ""
	I0717 19:36:49.709208  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.709217  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:49.709222  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:49.709286  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:49.745905  459741 cri.go:89] found id: ""
	I0717 19:36:49.745940  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.745952  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:49.745960  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:49.746038  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:49.779073  459741 cri.go:89] found id: ""
	I0717 19:36:49.779106  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.779117  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:49.779124  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:49.779190  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:49.815459  459741 cri.go:89] found id: ""
	I0717 19:36:49.815504  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.815516  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:49.815525  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:49.815635  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:49.854714  459741 cri.go:89] found id: ""
	I0717 19:36:49.854751  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.854760  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:49.854766  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:49.854821  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:49.897717  459741 cri.go:89] found id: ""
	I0717 19:36:49.897742  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.897752  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:49.897760  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:49.897824  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:49.933388  459741 cri.go:89] found id: ""
	I0717 19:36:49.933419  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.933429  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:49.933437  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:49.933527  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:49.971955  459741 cri.go:89] found id: ""
	I0717 19:36:49.971988  459741 logs.go:276] 0 containers: []
	W0717 19:36:49.971999  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:49.972011  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:49.972029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:50.025761  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:50.025801  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:50.039771  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:50.039801  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:50.111349  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:50.111374  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:50.111388  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:50.193972  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:50.194004  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:48.693842  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:51.192837  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.177150  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:54.675862  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.394841  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:54.395035  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:56.395227  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:52.733468  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:52.749052  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:52.749119  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:52.785364  459741 cri.go:89] found id: ""
	I0717 19:36:52.785392  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.785400  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:52.785407  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:52.785462  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:52.824177  459741 cri.go:89] found id: ""
	I0717 19:36:52.824211  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.824219  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:52.824225  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:52.824298  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:52.860781  459741 cri.go:89] found id: ""
	I0717 19:36:52.860812  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.860823  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:52.860831  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:52.860904  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:52.903963  459741 cri.go:89] found id: ""
	I0717 19:36:52.903995  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.904006  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:52.904014  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:52.904080  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:52.944920  459741 cri.go:89] found id: ""
	I0717 19:36:52.944950  459741 logs.go:276] 0 containers: []
	W0717 19:36:52.944961  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:52.944968  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:52.945033  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:53.007409  459741 cri.go:89] found id: ""
	I0717 19:36:53.007438  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.007449  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:53.007456  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:53.007526  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:53.048160  459741 cri.go:89] found id: ""
	I0717 19:36:53.048193  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.048205  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:53.048213  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:53.048285  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:53.083493  459741 cri.go:89] found id: ""
	I0717 19:36:53.083522  459741 logs.go:276] 0 containers: []
	W0717 19:36:53.083534  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:53.083546  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:53.083563  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:53.139380  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:53.139425  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:53.154005  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:53.154107  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:53.230123  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:53.230146  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:53.230160  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:53.307183  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:53.307228  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:55.849344  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:55.863554  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:55.863625  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:55.899317  459741 cri.go:89] found id: ""
	I0717 19:36:55.899347  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.899358  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:55.899365  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:55.899433  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:55.934725  459741 cri.go:89] found id: ""
	I0717 19:36:55.934760  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.934771  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:55.934779  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:55.934854  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:55.967721  459741 cri.go:89] found id: ""
	I0717 19:36:55.967751  459741 logs.go:276] 0 containers: []
	W0717 19:36:55.967760  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:55.967768  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:55.967835  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:56.001163  459741 cri.go:89] found id: ""
	I0717 19:36:56.001193  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.001203  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:56.001211  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:56.001309  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:56.040863  459741 cri.go:89] found id: ""
	I0717 19:36:56.040898  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.040910  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:56.040918  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:56.040990  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:56.075045  459741 cri.go:89] found id: ""
	I0717 19:36:56.075075  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.075083  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:56.075090  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:56.075141  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:56.115641  459741 cri.go:89] found id: ""
	I0717 19:36:56.115673  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.115683  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:56.115692  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:56.115757  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:56.154952  459741 cri.go:89] found id: ""
	I0717 19:36:56.154989  459741 logs.go:276] 0 containers: []
	W0717 19:36:56.155000  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:56.155012  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:56.155029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:56.168624  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:56.168655  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:56.241129  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:56.241149  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:56.241161  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:56.326577  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:56.326627  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:36:56.370835  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:56.370896  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:53.194230  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:55.693021  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:56.677604  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:59.177845  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:58.395814  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:00.894894  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:58.923483  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:36:58.936869  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:36:58.936971  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:36:58.970975  459741 cri.go:89] found id: ""
	I0717 19:36:58.971015  459741 logs.go:276] 0 containers: []
	W0717 19:36:58.971026  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:36:58.971036  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:36:58.971103  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:36:59.004902  459741 cri.go:89] found id: ""
	I0717 19:36:59.004936  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.004945  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:36:59.004953  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:36:59.005021  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:36:59.049595  459741 cri.go:89] found id: ""
	I0717 19:36:59.049627  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.049635  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:36:59.049642  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:36:59.049694  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:36:59.084143  459741 cri.go:89] found id: ""
	I0717 19:36:59.084175  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.084185  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:36:59.084192  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:36:59.084244  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:36:59.121362  459741 cri.go:89] found id: ""
	I0717 19:36:59.121397  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.121408  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:36:59.121416  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:36:59.121486  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:36:59.158791  459741 cri.go:89] found id: ""
	I0717 19:36:59.158823  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.158832  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:36:59.158839  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:36:59.158907  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:36:59.196785  459741 cri.go:89] found id: ""
	I0717 19:36:59.196814  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.196825  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:36:59.196832  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:36:59.196928  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:36:59.233526  459741 cri.go:89] found id: ""
	I0717 19:36:59.233585  459741 logs.go:276] 0 containers: []
	W0717 19:36:59.233602  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:36:59.233615  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:36:59.233633  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:36:59.287586  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:36:59.287629  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:36:59.303060  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:36:59.303109  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:36:59.380105  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:36:59.380141  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:36:59.380160  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:36:59.457673  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:36:59.457723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
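	(The cycle above is minikube's fallback diagnostics for the v1.20.0 control plane: finding no kube-apiserver, etcd, coredns, scheduler, proxy, controller-manager, kindnet or dashboard containers, it dumps the kubelet, dmesg, describe-nodes, CRI-O and container-status logs and retries a few seconds later. A minimal sketch of rerunning the same checks by hand, using the commands quoted in the log; the profile name is an assumption supplied by the caller, not taken from this run.)

#!/usr/bin/env bash
# Re-run the container queries and journal dumps shown in the cycle above.
# Usage: ./diagnose.sh <minikube-profile>   (profile name is a placeholder)
set -euo pipefail
PROFILE="${1:?usage: $0 <minikube-profile>}"

# Same per-component crictl queries minikube issues when the control plane is missing.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
  echo "== ${name} =="
  minikube -p "${PROFILE}" ssh "sudo crictl ps -a --quiet --name=${name}"
done

# Same journal dumps gathered as the "kubelet" and "CRI-O" logs.
minikube -p "${PROFILE}" ssh "sudo journalctl -u kubelet -n 400"
minikube -p "${PROFILE}" ssh "sudo journalctl -u crio -n 400"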
	I0717 19:36:57.693064  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:36:59.696137  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:02.194529  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:01.676676  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:04.174546  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.176591  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:02.895007  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:04.896128  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:01.999397  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:02.013638  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:02.013769  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:02.053831  459741 cri.go:89] found id: ""
	I0717 19:37:02.053860  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.053869  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:02.053875  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:02.053929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:02.095600  459741 cri.go:89] found id: ""
	I0717 19:37:02.095634  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.095644  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:02.095650  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:02.095703  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:02.134219  459741 cri.go:89] found id: ""
	I0717 19:37:02.134253  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.134267  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:02.134277  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:02.134351  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:02.172985  459741 cri.go:89] found id: ""
	I0717 19:37:02.173017  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.173029  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:02.173037  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:02.173109  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:02.210465  459741 cri.go:89] found id: ""
	I0717 19:37:02.210492  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.210500  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:02.210506  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:02.210562  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:02.246736  459741 cri.go:89] found id: ""
	I0717 19:37:02.246767  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.246775  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:02.246781  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:02.246834  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:02.285131  459741 cri.go:89] found id: ""
	I0717 19:37:02.285166  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.285177  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:02.285185  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:02.285254  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:02.323199  459741 cri.go:89] found id: ""
	I0717 19:37:02.323232  459741 logs.go:276] 0 containers: []
	W0717 19:37:02.323241  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:02.323252  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:02.323266  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:02.337356  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:02.337392  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:02.411669  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:02.411706  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:02.411724  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:02.488543  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:02.488590  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:02.531147  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:02.531189  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:05.085888  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:05.099059  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:05.099134  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:05.140745  459741 cri.go:89] found id: ""
	I0717 19:37:05.140771  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.140783  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:05.140791  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:05.140859  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:05.175634  459741 cri.go:89] found id: ""
	I0717 19:37:05.175669  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.175679  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:05.175687  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:05.175761  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:05.213114  459741 cri.go:89] found id: ""
	I0717 19:37:05.213148  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.213157  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:05.213171  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:05.213242  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:05.249756  459741 cri.go:89] found id: ""
	I0717 19:37:05.249791  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.249803  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:05.249811  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:05.249882  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:05.285601  459741 cri.go:89] found id: ""
	I0717 19:37:05.285634  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.285645  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:05.285654  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:05.285729  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:05.325523  459741 cri.go:89] found id: ""
	I0717 19:37:05.325557  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.325566  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:05.325573  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:05.325641  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:05.364250  459741 cri.go:89] found id: ""
	I0717 19:37:05.364284  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.364295  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:05.364303  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:05.364377  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:05.399924  459741 cri.go:89] found id: ""
	I0717 19:37:05.399951  459741 logs.go:276] 0 containers: []
	W0717 19:37:05.399958  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:05.399967  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:05.399979  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:05.456770  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:05.456821  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:05.472041  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:05.472073  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:05.539653  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:05.539685  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:05.539703  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:05.628977  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:05.629023  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:04.693176  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.693594  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:08.677525  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.175472  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:06.897414  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:09.394322  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.395513  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:08.181585  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:08.195153  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:08.195225  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:08.234624  459741 cri.go:89] found id: ""
	I0717 19:37:08.234662  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.234674  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:08.234682  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:08.234739  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:08.273034  459741 cri.go:89] found id: ""
	I0717 19:37:08.273069  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.273081  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:08.273089  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:08.273157  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:08.310695  459741 cri.go:89] found id: ""
	I0717 19:37:08.310728  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.310740  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:08.310749  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:08.310815  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:08.346891  459741 cri.go:89] found id: ""
	I0717 19:37:08.346925  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.346936  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:08.346944  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:08.347015  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:08.384830  459741 cri.go:89] found id: ""
	I0717 19:37:08.384863  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.384872  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:08.384878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:08.384948  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:08.423939  459741 cri.go:89] found id: ""
	I0717 19:37:08.423973  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.423983  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:08.423991  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:08.424046  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:08.460822  459741 cri.go:89] found id: ""
	I0717 19:37:08.460854  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.460863  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:08.460874  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:08.460929  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:08.497122  459741 cri.go:89] found id: ""
	I0717 19:37:08.497152  459741 logs.go:276] 0 containers: []
	W0717 19:37:08.497164  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:08.497182  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:08.497197  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:08.549130  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:08.549179  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:08.566072  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:08.566109  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:08.637602  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:08.637629  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:08.637647  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:08.729025  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:08.729078  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:11.270696  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:11.285472  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:11.285554  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:11.319587  459741 cri.go:89] found id: ""
	I0717 19:37:11.319629  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.319638  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:11.319646  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:11.319712  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:11.353044  459741 cri.go:89] found id: ""
	I0717 19:37:11.353077  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.353087  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:11.353093  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:11.353189  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:11.389515  459741 cri.go:89] found id: ""
	I0717 19:37:11.389545  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.389557  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:11.389565  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:11.389634  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:11.430599  459741 cri.go:89] found id: ""
	I0717 19:37:11.430632  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.430640  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:11.430646  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:11.430714  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:11.472171  459741 cri.go:89] found id: ""
	I0717 19:37:11.472207  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.472217  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:11.472223  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:11.472295  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:09.193245  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.695407  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:13.176224  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:15.179677  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:13.895579  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:16.394706  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:11.510599  459741 cri.go:89] found id: ""
	I0717 19:37:11.510672  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.510689  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:11.510706  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:11.510779  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:11.550914  459741 cri.go:89] found id: ""
	I0717 19:37:11.550946  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.550954  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:11.550960  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:11.551017  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:11.591129  459741 cri.go:89] found id: ""
	I0717 19:37:11.591205  459741 logs.go:276] 0 containers: []
	W0717 19:37:11.591219  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:11.591233  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:11.591252  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:11.646229  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:11.646265  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:11.661204  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:11.661243  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:11.742396  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:11.742426  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:11.742442  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:11.824647  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:11.824687  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:14.364360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:14.381022  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:14.381101  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:14.419922  459741 cri.go:89] found id: ""
	I0717 19:37:14.419960  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.419971  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:14.419977  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:14.420032  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:14.459256  459741 cri.go:89] found id: ""
	I0717 19:37:14.459288  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.459296  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:14.459317  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:14.459387  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:14.494487  459741 cri.go:89] found id: ""
	I0717 19:37:14.494517  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.494528  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:14.494535  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:14.494609  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:14.528878  459741 cri.go:89] found id: ""
	I0717 19:37:14.528919  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.528928  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:14.528934  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:14.528999  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:14.564401  459741 cri.go:89] found id: ""
	I0717 19:37:14.564439  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.564451  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:14.564460  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:14.564548  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:14.604641  459741 cri.go:89] found id: ""
	I0717 19:37:14.604682  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.604694  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:14.604703  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:14.604770  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:14.638128  459741 cri.go:89] found id: ""
	I0717 19:37:14.638159  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.638168  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:14.638175  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:14.638245  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:14.679475  459741 cri.go:89] found id: ""
	I0717 19:37:14.679508  459741 logs.go:276] 0 containers: []
	W0717 19:37:14.679518  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:14.679529  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:14.679545  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:14.733829  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:14.733871  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:14.748878  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:14.748910  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:14.821043  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:14.821073  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:14.821089  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:14.905137  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:14.905178  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:14.193577  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:16.193939  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:17.181158  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:19.675868  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:18.894678  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:20.895683  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:17.445221  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:17.459152  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:17.459221  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:17.498175  459741 cri.go:89] found id: ""
	I0717 19:37:17.498204  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.498216  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:17.498226  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:17.498287  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:17.534460  459741 cri.go:89] found id: ""
	I0717 19:37:17.534498  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.534506  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:17.534512  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:17.534571  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:17.571998  459741 cri.go:89] found id: ""
	I0717 19:37:17.572030  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.572040  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:17.572047  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:17.572110  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:17.611184  459741 cri.go:89] found id: ""
	I0717 19:37:17.611215  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.611224  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:17.611231  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:17.611282  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:17.656227  459741 cri.go:89] found id: ""
	I0717 19:37:17.656275  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.656287  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:17.656295  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:17.656361  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:17.695693  459741 cri.go:89] found id: ""
	I0717 19:37:17.695727  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.695746  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:17.695763  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:17.695835  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:17.734017  459741 cri.go:89] found id: ""
	I0717 19:37:17.734043  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.734052  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:17.734057  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:17.734123  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:17.771539  459741 cri.go:89] found id: ""
	I0717 19:37:17.771575  459741 logs.go:276] 0 containers: []
	W0717 19:37:17.771586  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:17.771597  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:17.771611  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:17.811742  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:17.811783  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:17.861865  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:17.861909  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:17.876221  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:17.876255  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:17.957239  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:17.957262  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:17.957278  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
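	(Every "describe nodes" attempt in these retry cycles fails identically: the bundled v1.20.0 kubectl is refused on localhost:8443 because, as the crictl queries keep showing, no kube-apiserver container ever started. A sketch of the same probe run directly inside the guest; the kubectl invocation is copied from the log, while the ss check is an extra assumption about what the guest image provides.)

# Run inside the guest, e.g. via `minikube -p <profile> ssh` (profile is a placeholder).
sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig
# Quick check whether anything is listening on the apiserver port at all
# (assumes the ss utility is present in the guest image).
sudo ss -ltn 'sport = :8443'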
	I0717 19:37:20.539123  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:20.554464  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:20.554546  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:20.591656  459741 cri.go:89] found id: ""
	I0717 19:37:20.591697  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.591706  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:20.591716  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:20.591775  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:20.629470  459741 cri.go:89] found id: ""
	I0717 19:37:20.629504  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.629513  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:20.629519  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:20.629587  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:20.670022  459741 cri.go:89] found id: ""
	I0717 19:37:20.670090  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.670108  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:20.670120  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:20.670199  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:20.711820  459741 cri.go:89] found id: ""
	I0717 19:37:20.711858  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.711869  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:20.711878  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:20.711952  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:20.746305  459741 cri.go:89] found id: ""
	I0717 19:37:20.746339  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.746349  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:20.746356  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:20.746423  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:20.782218  459741 cri.go:89] found id: ""
	I0717 19:37:20.782255  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.782266  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:20.782275  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:20.782351  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:20.818704  459741 cri.go:89] found id: ""
	I0717 19:37:20.818740  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.818749  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:20.818757  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:20.818820  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:20.853662  459741 cri.go:89] found id: ""
	I0717 19:37:20.853693  459741 logs.go:276] 0 containers: []
	W0717 19:37:20.853701  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:20.853710  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:20.853723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:20.896351  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:20.896377  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:20.948402  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:20.948450  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:20.962807  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:20.962840  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:21.057005  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:21.057036  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:21.057055  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:18.693664  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:21.192940  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:21.676124  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:24.175970  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:23.395791  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:25.894186  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
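	(Interleaved with the 459741 retry loop, three other test profiles — processes 459147, 459061 and 459447 — keep polling metrics-server pods that never report Ready. A hedged sketch of inspecting the same condition by hand; the kubeconfig context is a placeholder, while pod names such as metrics-server-569cc877fc-mtnc6 are taken from the log.)

#!/usr/bin/env bash
# Inspect a metrics-server pod that pod_ready.go keeps reporting as not Ready.
# Usage: ./check-metrics-server.sh <kubeconfig-context> <pod-name>
set -euo pipefail
CTX="${1:?usage: $0 <kubeconfig-context> <pod-name>}"
POD="${2:?usage: $0 <kubeconfig-context> <pod-name>}"

kubectl --context "${CTX}" -n kube-system get pod "${POD}" -o wide
kubectl --context "${CTX}" -n kube-system describe pod "${POD}"
# Wait for readiness the way the harness does, but with an explicit short timeout.
kubectl --context "${CTX}" -n kube-system wait --for=condition=Ready "pod/${POD}" --timeout=120s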
	I0717 19:37:23.634596  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:23.648460  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:23.648555  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:23.687289  459741 cri.go:89] found id: ""
	I0717 19:37:23.687320  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.687331  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:23.687341  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:23.687407  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:23.725794  459741 cri.go:89] found id: ""
	I0717 19:37:23.725826  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.725847  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:23.725855  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:23.725916  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:23.761575  459741 cri.go:89] found id: ""
	I0717 19:37:23.761624  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.761635  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:23.761643  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:23.761709  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:23.800061  459741 cri.go:89] found id: ""
	I0717 19:37:23.800098  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.800111  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:23.800120  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:23.800190  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:23.836067  459741 cri.go:89] found id: ""
	I0717 19:37:23.836098  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.836107  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:23.836113  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:23.836170  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:23.875151  459741 cri.go:89] found id: ""
	I0717 19:37:23.875179  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.875192  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:23.875200  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:23.875268  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:23.913641  459741 cri.go:89] found id: ""
	I0717 19:37:23.913675  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.913685  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:23.913693  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:23.913759  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:23.950362  459741 cri.go:89] found id: ""
	I0717 19:37:23.950391  459741 logs.go:276] 0 containers: []
	W0717 19:37:23.950400  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:23.950410  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:23.950426  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:24.000879  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:24.000924  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:24.014874  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:24.014912  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:24.086589  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:24.086624  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:24.086639  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:24.163160  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:24.163208  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:23.194522  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:25.694306  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:26.675299  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:28.675607  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:31.176216  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:27.895077  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:29.895208  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:26.705781  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:26.720471  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:26.720562  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:26.776895  459741 cri.go:89] found id: ""
	I0717 19:37:26.776927  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.776936  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:26.776945  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:26.777038  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:26.812191  459741 cri.go:89] found id: ""
	I0717 19:37:26.812219  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.812228  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:26.812234  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:26.812288  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:26.851142  459741 cri.go:89] found id: ""
	I0717 19:37:26.851174  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.851183  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:26.851189  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:26.851243  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:26.887218  459741 cri.go:89] found id: ""
	I0717 19:37:26.887254  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.887266  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:26.887274  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:26.887364  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:26.924197  459741 cri.go:89] found id: ""
	I0717 19:37:26.924226  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.924234  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:26.924240  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:26.924293  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:26.964475  459741 cri.go:89] found id: ""
	I0717 19:37:26.964528  459741 logs.go:276] 0 containers: []
	W0717 19:37:26.964538  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:26.964545  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:26.964618  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:27.001951  459741 cri.go:89] found id: ""
	I0717 19:37:27.002001  459741 logs.go:276] 0 containers: []
	W0717 19:37:27.002010  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:27.002017  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:27.002068  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:27.037062  459741 cri.go:89] found id: ""
	I0717 19:37:27.037094  459741 logs.go:276] 0 containers: []
	W0717 19:37:27.037108  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:27.037122  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:27.037140  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:27.090343  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:27.090389  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:27.104534  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:27.104579  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:27.179957  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:27.179982  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:27.179995  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:27.260358  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:27.260399  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:29.806487  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:29.821519  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:29.821584  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:29.856293  459741 cri.go:89] found id: ""
	I0717 19:37:29.856328  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.856338  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:29.856347  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:29.856413  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:29.893174  459741 cri.go:89] found id: ""
	I0717 19:37:29.893210  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.893220  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:29.893229  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:29.893294  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:29.928264  459741 cri.go:89] found id: ""
	I0717 19:37:29.928298  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.928309  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:29.928316  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:29.928386  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:29.963399  459741 cri.go:89] found id: ""
	I0717 19:37:29.963441  459741 logs.go:276] 0 containers: []
	W0717 19:37:29.963453  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:29.963461  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:29.963532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:30.001835  459741 cri.go:89] found id: ""
	I0717 19:37:30.001868  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.001878  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:30.001886  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:30.001953  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:30.039476  459741 cri.go:89] found id: ""
	I0717 19:37:30.039507  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.039516  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:30.039526  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:30.039601  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:30.076051  459741 cri.go:89] found id: ""
	I0717 19:37:30.076089  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.076101  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:30.076121  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:30.076198  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:30.110959  459741 cri.go:89] found id: ""
	I0717 19:37:30.110988  459741 logs.go:276] 0 containers: []
	W0717 19:37:30.111000  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:30.111013  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:30.111029  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:30.195062  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:30.195101  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:30.235830  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:30.235872  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:30.291057  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:30.291098  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:30.306510  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:30.306543  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:30.382689  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:28.193720  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:30.693187  459147 pod_ready.go:102] pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.193323  459147 pod_ready.go:81] duration metric: took 4m0.007067784s for pod "metrics-server-78fcd8795b-q2jgb" in "kube-system" namespace to be "Ready" ...
	E0717 19:37:32.193346  459147 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 19:37:32.193354  459147 pod_ready.go:38] duration metric: took 4m5.556690666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:37:32.193373  459147 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:37:32.193409  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:32.193469  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:32.245735  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:32.245775  459147 cri.go:89] found id: ""
	I0717 19:37:32.245785  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:32.245865  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.250669  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:32.250736  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:32.291837  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:32.291863  459147 cri.go:89] found id: ""
	I0717 19:37:32.291873  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:32.291944  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.296739  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:32.296806  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:32.335823  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:32.335854  459147 cri.go:89] found id: ""
	I0717 19:37:32.335873  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:32.335944  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.341789  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:32.341875  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:32.382106  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:32.382128  459147 cri.go:89] found id: ""
	I0717 19:37:32.382136  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:32.382183  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.386399  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:32.386453  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:32.426319  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:32.426348  459147 cri.go:89] found id: ""
	I0717 19:37:32.426358  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:32.426415  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.431280  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:32.431363  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:33.176404  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:35.177851  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.397457  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:34.894702  459447 pod_ready.go:102] pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:32.883437  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:32.898085  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:32.898159  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:32.933782  459741 cri.go:89] found id: ""
	I0717 19:37:32.933813  459741 logs.go:276] 0 containers: []
	W0717 19:37:32.933823  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:32.933842  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:32.933909  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:32.973843  459741 cri.go:89] found id: ""
	I0717 19:37:32.973871  459741 logs.go:276] 0 containers: []
	W0717 19:37:32.973879  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:32.973885  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:32.973936  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:33.010691  459741 cri.go:89] found id: ""
	I0717 19:37:33.010718  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.010727  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:33.010732  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:33.010791  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:33.051223  459741 cri.go:89] found id: ""
	I0717 19:37:33.051258  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.051269  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:33.051276  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:33.051345  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:33.091182  459741 cri.go:89] found id: ""
	I0717 19:37:33.091212  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.091220  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:33.091225  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:33.091279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:33.128755  459741 cri.go:89] found id: ""
	I0717 19:37:33.128791  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.128804  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:33.128820  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:33.128887  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:33.171834  459741 cri.go:89] found id: ""
	I0717 19:37:33.171871  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.171883  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:33.171890  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:33.171956  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:33.230954  459741 cri.go:89] found id: ""
	I0717 19:37:33.230982  459741 logs.go:276] 0 containers: []
	W0717 19:37:33.230990  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:33.231001  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:33.231013  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:33.325437  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:33.325483  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:33.325500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:33.418548  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:33.418590  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:33.467574  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:33.467614  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:33.521312  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:33.521346  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.037360  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:36.051209  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:36.051279  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:36.088849  459741 cri.go:89] found id: ""
	I0717 19:37:36.088897  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.088909  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:36.088916  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:36.088973  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:36.124070  459741 cri.go:89] found id: ""
	I0717 19:37:36.124106  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.124118  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:36.124125  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:36.124199  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:36.159373  459741 cri.go:89] found id: ""
	I0717 19:37:36.159402  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.159410  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:36.159415  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:36.159467  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:36.197269  459741 cri.go:89] found id: ""
	I0717 19:37:36.197294  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.197302  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:36.197337  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:36.197389  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:36.231024  459741 cri.go:89] found id: ""
	I0717 19:37:36.231060  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.231072  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:36.231080  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:36.231152  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:36.265388  459741 cri.go:89] found id: ""
	I0717 19:37:36.265414  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.265422  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:36.265429  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:36.265477  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:36.301738  459741 cri.go:89] found id: ""
	I0717 19:37:36.301774  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.301786  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:36.301794  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:36.301892  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:36.340042  459741 cri.go:89] found id: ""
	I0717 19:37:36.340072  459741 logs.go:276] 0 containers: []
	W0717 19:37:36.340080  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:36.340091  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:36.340113  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:36.389928  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:36.389962  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:36.442668  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:36.442698  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.458862  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:36.458908  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:32.470477  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:32.470505  459147 cri.go:89] found id: ""
	I0717 19:37:32.470514  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:32.470579  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.474790  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:32.474845  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:32.511020  459147 cri.go:89] found id: ""
	I0717 19:37:32.511060  459147 logs.go:276] 0 containers: []
	W0717 19:37:32.511075  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:32.511083  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:32.511148  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:32.550662  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:32.550694  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:32.550700  459147 cri.go:89] found id: ""
	I0717 19:37:32.550710  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:32.550779  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.555544  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:32.559818  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:32.559845  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:32.599011  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:32.599044  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:32.639034  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:32.639072  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:32.680456  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:32.680497  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:32.735881  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:32.735919  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:33.295876  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:33.295927  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:33.453164  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:33.453204  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:33.469665  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:33.469696  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:33.518388  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:33.518425  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:33.580637  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:33.580683  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:33.618544  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:33.618584  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:33.656083  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:33.656127  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:33.703083  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:33.703133  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:36.261037  459147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:36.278701  459147 api_server.go:72] duration metric: took 4m12.907019507s to wait for apiserver process to appear ...
	I0717 19:37:36.278734  459147 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:37:36.278780  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:36.278843  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:36.320128  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:36.320158  459147 cri.go:89] found id: ""
	I0717 19:37:36.320169  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:36.320231  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.325077  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:36.325145  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:36.375930  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:36.375956  459147 cri.go:89] found id: ""
	I0717 19:37:36.375965  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:36.376022  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.381348  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:36.381428  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:36.425613  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:36.425642  459147 cri.go:89] found id: ""
	I0717 19:37:36.425653  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:36.425718  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.430743  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:36.430809  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:36.473039  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:36.473071  459147 cri.go:89] found id: ""
	I0717 19:37:36.473082  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:36.473144  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.477553  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:36.477632  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:36.519042  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:36.519066  459147 cri.go:89] found id: ""
	I0717 19:37:36.519088  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:36.519168  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.523986  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:36.524052  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:36.565547  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:36.565574  459147 cri.go:89] found id: ""
	I0717 19:37:36.565583  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:36.565636  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.570755  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:36.570832  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:36.608157  459147 cri.go:89] found id: ""
	I0717 19:37:36.608185  459147 logs.go:276] 0 containers: []
	W0717 19:37:36.608194  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:36.608201  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:36.608258  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:36.652807  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:36.652828  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:36.652832  459147 cri.go:89] found id: ""
	I0717 19:37:36.652839  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:36.652899  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.657815  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:36.663187  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:36.663219  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:36.681970  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:36.682006  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:36.797996  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:36.798041  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:36.862257  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:36.862300  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:36.900711  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:36.900752  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:37.384370  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:37.384415  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:37.676589  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:40.177720  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:36.888133  459447 pod_ready.go:81] duration metric: took 4m0.000157346s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" ...
	E0717 19:37:36.888161  459447 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-7rl9d" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 19:37:36.888179  459447 pod_ready.go:38] duration metric: took 4m7.552581235s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:37:36.888210  459447 kubeadm.go:597] duration metric: took 4m17.06862666s to restartPrimaryControlPlane
	W0717 19:37:36.888317  459447 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:37:36.888368  459447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	W0717 19:37:36.537169  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:36.537199  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:36.537216  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:39.120374  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:39.138989  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:39.139065  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:39.198086  459741 cri.go:89] found id: ""
	I0717 19:37:39.198113  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.198121  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:39.198128  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:39.198192  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:39.249660  459741 cri.go:89] found id: ""
	I0717 19:37:39.249707  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.249718  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:39.249725  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:39.249802  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:39.296042  459741 cri.go:89] found id: ""
	I0717 19:37:39.296079  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.296105  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:39.296115  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:39.296198  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:39.335401  459741 cri.go:89] found id: ""
	I0717 19:37:39.335441  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.335453  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:39.335461  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:39.335532  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:39.379343  459741 cri.go:89] found id: ""
	I0717 19:37:39.379389  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.379401  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:39.379409  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:39.379478  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:39.417450  459741 cri.go:89] found id: ""
	I0717 19:37:39.417478  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.417486  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:39.417493  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:39.417556  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:39.453778  459741 cri.go:89] found id: ""
	I0717 19:37:39.453821  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.453835  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:39.453843  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:39.453937  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:39.490619  459741 cri.go:89] found id: ""
	I0717 19:37:39.490654  459741 logs.go:276] 0 containers: []
	W0717 19:37:39.490666  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:39.490678  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:39.490695  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:39.552266  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:39.552304  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:39.567973  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:39.568018  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:39.659709  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:39.659740  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:39.659757  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:39.752017  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:39.752064  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:37.438269  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:37.438314  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:37.491298  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:37.491338  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:37.544646  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:37.544686  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:37.608191  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:37.608229  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:37.652477  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:37.652526  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:37.693416  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:37.693460  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:37.740997  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:37.741045  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.285764  459147 api_server.go:253] Checking apiserver healthz at https://192.168.61.66:8443/healthz ...
	I0717 19:37:40.292091  459147 api_server.go:279] https://192.168.61.66:8443/healthz returned 200:
	ok
	I0717 19:37:40.293337  459147 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:37:40.293368  459147 api_server.go:131] duration metric: took 4.014624748s to wait for apiserver health ...
	I0717 19:37:40.293379  459147 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:37:40.293412  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:40.293485  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:40.334754  459147 cri.go:89] found id: "94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:40.334783  459147 cri.go:89] found id: ""
	I0717 19:37:40.334794  459147 logs.go:276] 1 containers: [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5]
	I0717 19:37:40.334855  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.338862  459147 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:40.338932  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:40.379320  459147 cri.go:89] found id: "ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:40.379350  459147 cri.go:89] found id: ""
	I0717 19:37:40.379361  459147 logs.go:276] 1 containers: [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0]
	I0717 19:37:40.379424  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.384351  459147 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:40.384426  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:40.423393  459147 cri.go:89] found id: "9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:40.423421  459147 cri.go:89] found id: ""
	I0717 19:37:40.423432  459147 logs.go:276] 1 containers: [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002]
	I0717 19:37:40.423496  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.429541  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:40.429622  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:40.476723  459147 cri.go:89] found id: "5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:40.476752  459147 cri.go:89] found id: ""
	I0717 19:37:40.476762  459147 logs.go:276] 1 containers: [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df]
	I0717 19:37:40.476822  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.483324  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:40.483407  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:40.530062  459147 cri.go:89] found id: "ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:40.530090  459147 cri.go:89] found id: ""
	I0717 19:37:40.530100  459147 logs.go:276] 1 containers: [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77]
	I0717 19:37:40.530160  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.535894  459147 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:40.535980  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:40.574966  459147 cri.go:89] found id: "e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:40.575000  459147 cri.go:89] found id: ""
	I0717 19:37:40.575011  459147 logs.go:276] 1 containers: [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5]
	I0717 19:37:40.575082  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.579633  459147 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:40.579709  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:40.617093  459147 cri.go:89] found id: ""
	I0717 19:37:40.617131  459147 logs.go:276] 0 containers: []
	W0717 19:37:40.617143  459147 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:40.617151  459147 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 19:37:40.617217  459147 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 19:37:40.670143  459147 cri.go:89] found id: "a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.670170  459147 cri.go:89] found id: "7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:40.670177  459147 cri.go:89] found id: ""
	I0717 19:37:40.670188  459147 logs.go:276] 2 containers: [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe]
	I0717 19:37:40.670265  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.675795  459147 ssh_runner.go:195] Run: which crictl
	I0717 19:37:40.681005  459147 logs.go:123] Gathering logs for storage-provisioner [a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c] ...
	I0717 19:37:40.681027  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2b43922786ee32d44d1d975d7f0fb5ccd4b91fffc7dc0e7b98d823bb6fc302c"
	I0717 19:37:40.729750  459147 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:40.729797  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:41.109749  459147 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:41.109806  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:41.128573  459147 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:41.128616  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 19:37:41.246119  459147 logs.go:123] Gathering logs for kube-apiserver [94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5] ...
	I0717 19:37:41.246163  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94d1d32be33b08d8620fb692b5d6ff1c8983ad8a9f8962a6d42c3b69247318c5"
	I0717 19:37:41.298281  459147 logs.go:123] Gathering logs for etcd [ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0] ...
	I0717 19:37:41.298342  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ade9a3d882a93ff3a3b5ed244fcf5c85c0255873c6b7f2dee67db03478c998f0"
	I0717 19:37:41.376160  459147 logs.go:123] Gathering logs for kube-controller-manager [e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5] ...
	I0717 19:37:41.376205  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14420efe38fae70e9a709e54fd96a249702ea85b37e5af16b661ad97942e8b5"
	I0717 19:37:41.444696  459147 logs.go:123] Gathering logs for container status ...
	I0717 19:37:41.444732  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:41.488191  459147 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:41.488225  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:41.554001  459147 logs.go:123] Gathering logs for coredns [9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002] ...
	I0717 19:37:41.554055  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9015174934a8d80c47ef9ef21eaf158f7c0d077466221e6fd79d60cc819d4002"
	I0717 19:37:41.596172  459147 logs.go:123] Gathering logs for kube-scheduler [5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df] ...
	I0717 19:37:41.596208  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b404425859ea6d941f0b6ab115258f3ce8034b9639661b60e67985bc482e4df"
	I0717 19:37:41.636145  459147 logs.go:123] Gathering logs for kube-proxy [ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77] ...
	I0717 19:37:41.636184  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab5470bd761391912517443a46e719da2371add65af096feefd87ce739c25a77"
	I0717 19:37:41.687058  459147 logs.go:123] Gathering logs for storage-provisioner [7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe] ...
	I0717 19:37:41.687092  459147 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7511bf4f30ac34d0eb7ff93ce5ab37758082e9f816a667c178e9d9724bb5defe"
	I0717 19:37:44.246334  459147 system_pods.go:59] 8 kube-system pods found
	I0717 19:37:44.246367  459147 system_pods.go:61] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running
	I0717 19:37:44.246373  459147 system_pods.go:61] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running
	I0717 19:37:44.246379  459147 system_pods.go:61] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running
	I0717 19:37:44.246384  459147 system_pods.go:61] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running
	I0717 19:37:44.246390  459147 system_pods.go:61] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:37:44.246394  459147 system_pods.go:61] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running
	I0717 19:37:44.246401  459147 system_pods.go:61] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:37:44.246406  459147 system_pods.go:61] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:37:44.246416  459147 system_pods.go:74] duration metric: took 3.953030235s to wait for pod list to return data ...
	I0717 19:37:44.246425  459147 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:37:44.249315  459147 default_sa.go:45] found service account: "default"
	I0717 19:37:44.249336  459147 default_sa.go:55] duration metric: took 2.904936ms for default service account to be created ...
	I0717 19:37:44.249344  459147 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:37:44.254845  459147 system_pods.go:86] 8 kube-system pods found
	I0717 19:37:44.254873  459147 system_pods.go:89] "coredns-5cfdc65f69-hk8t7" [fb861ad3-b9dc-4bd7-b84f-90a8fd5ca3b5] Running
	I0717 19:37:44.254879  459147 system_pods.go:89] "etcd-no-preload-713715" [bf2b0a70-5d33-4cd8-80a7-b3bd69bf2ebc] Running
	I0717 19:37:44.254883  459147 system_pods.go:89] "kube-apiserver-no-preload-713715" [daca9c97-3eb9-4d53-8cd2-8eb5fd7e2332] Running
	I0717 19:37:44.254888  459147 system_pods.go:89] "kube-controller-manager-no-preload-713715" [be475492-96cc-4738-a4a1-26ee6d843bda] Running
	I0717 19:37:44.254892  459147 system_pods.go:89] "kube-proxy-x85f5" [aaaf7ad0-8b1f-483c-977b-71ca6f2808c4] Running
	I0717 19:37:44.254895  459147 system_pods.go:89] "kube-scheduler-no-preload-713715" [b0ef7198-3b59-458a-9889-70d24909d81a] Running
	I0717 19:37:44.254902  459147 system_pods.go:89] "metrics-server-78fcd8795b-q2jgb" [4e882d43-dbeb-467a-980f-095e1f79dcf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:37:44.254908  459147 system_pods.go:89] "storage-provisioner" [785118d7-5d47-42fb-a3be-a13f7a837b2b] Running
	I0717 19:37:44.254916  459147 system_pods.go:126] duration metric: took 5.565796ms to wait for k8s-apps to be running ...
	I0717 19:37:44.254922  459147 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:37:44.254970  459147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:37:44.273765  459147 system_svc.go:56] duration metric: took 18.830474ms WaitForService to wait for kubelet
	I0717 19:37:44.273805  459147 kubeadm.go:582] duration metric: took 4m20.90212576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:37:44.273838  459147 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:37:44.278782  459147 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:37:44.278833  459147 node_conditions.go:123] node cpu capacity is 2
	I0717 19:37:44.278864  459147 node_conditions.go:105] duration metric: took 5.01941ms to run NodePressure ...
	I0717 19:37:44.278879  459147 start.go:241] waiting for startup goroutines ...
	I0717 19:37:44.278889  459147 start.go:246] waiting for cluster config update ...
	I0717 19:37:44.278906  459147 start.go:255] writing updated cluster config ...
	I0717 19:37:44.279303  459147 ssh_runner.go:195] Run: rm -f paused
	I0717 19:37:44.331361  459147 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 19:37:44.334137  459147 out.go:177] * Done! kubectl is now configured to use "no-preload-713715" cluster and "default" namespace by default
	I0717 19:37:42.676991  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:45.176025  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:42.298864  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:42.312076  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:37:42.312160  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:37:42.346742  459741 cri.go:89] found id: ""
	I0717 19:37:42.346767  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.346782  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:37:42.346787  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:37:42.346839  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:37:42.386100  459741 cri.go:89] found id: ""
	I0717 19:37:42.386131  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.386139  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:37:42.386145  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:37:42.386196  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:37:42.420604  459741 cri.go:89] found id: ""
	I0717 19:37:42.420634  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.420646  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:37:42.420656  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:37:42.420725  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:37:42.457305  459741 cri.go:89] found id: ""
	I0717 19:37:42.457338  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.457349  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:37:42.457357  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:37:42.457422  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:37:42.491383  459741 cri.go:89] found id: ""
	I0717 19:37:42.491418  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.491427  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:37:42.491434  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:37:42.491489  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:37:42.527500  459741 cri.go:89] found id: ""
	I0717 19:37:42.527533  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.527547  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:37:42.527557  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:37:42.527642  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:37:42.560724  459741 cri.go:89] found id: ""
	I0717 19:37:42.560759  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.560769  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:37:42.560778  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:37:42.560854  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:37:42.595812  459741 cri.go:89] found id: ""
	I0717 19:37:42.595846  459741 logs.go:276] 0 containers: []
	W0717 19:37:42.595858  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:37:42.595870  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:37:42.595886  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:37:42.610094  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:37:42.610129  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:37:42.683744  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:37:42.683763  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:37:42.683776  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:37:42.767187  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:37:42.767237  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 19:37:42.810319  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:37:42.810350  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:37:45.363245  459741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:37:45.378562  459741 kubeadm.go:597] duration metric: took 4m4.629259775s to restartPrimaryControlPlane
	W0717 19:37:45.378681  459741 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:37:45.378723  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:37:47.675784  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:50.174617  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:50.298107  459741 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.919332692s)
	I0717 19:37:50.298189  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:37:50.314299  459741 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:37:50.325112  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:37:50.335943  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:37:50.335970  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:37:50.336018  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:37:50.345604  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:37:50.345669  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:37:50.355339  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:37:50.365401  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:37:50.365468  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:37:50.378870  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:37:50.388710  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:37:50.388779  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:37:50.398847  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:37:50.408579  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:37:50.408648  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:37:50.419223  459741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:37:50.655878  459741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:37:52.175610  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:54.675346  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:57.175606  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:37:59.175665  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:01.675667  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:04.174856  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:06.175048  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:08.558767  459447 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.670364582s)
	I0717 19:38:08.558869  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:08.574972  459447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:38:08.585748  459447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:38:08.595641  459447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:38:08.595677  459447 kubeadm.go:157] found existing configuration files:
	
	I0717 19:38:08.595741  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 19:38:08.605738  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:38:08.605792  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:38:08.615415  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 19:38:08.625406  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:38:08.625465  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:38:08.635462  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 19:38:08.644862  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:38:08.644938  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:38:08.654840  459447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 19:38:08.664308  459447 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:38:08.664371  459447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:38:08.675152  459447 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:38:08.726060  459447 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 19:38:08.726181  459447 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:38:08.868399  459447 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:38:08.868535  459447 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:38:08.868680  459447 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:38:09.092126  459447 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:38:09.094144  459447 out.go:204]   - Generating certificates and keys ...
	I0717 19:38:09.094257  459447 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:38:09.094344  459447 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:38:09.094447  459447 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:38:09.094529  459447 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:38:09.094728  459447 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:38:09.094841  459447 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:38:09.094958  459447 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:38:09.095051  459447 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:38:09.095145  459447 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:38:09.095234  459447 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:38:09.095302  459447 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:38:09.095407  459447 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:38:09.220760  459447 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:38:09.395779  459447 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:38:09.485283  459447 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:38:09.582142  459447 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:38:09.644739  459447 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:38:09.645546  459447 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:38:09.648168  459447 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:38:08.175516  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:10.676234  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:09.651091  459447 out.go:204]   - Booting up control plane ...
	I0717 19:38:09.651237  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:38:09.651380  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:38:09.651472  459447 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:38:09.672137  459447 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:38:09.675016  459447 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:38:09.675265  459447 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:38:09.835705  459447 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:38:09.835804  459447 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:38:10.837657  459447 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002210874s
	I0717 19:38:10.837780  459447 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 19:38:15.841849  459447 kubeadm.go:310] [api-check] The API server is healthy after 5.002346886s
	I0717 19:38:15.853189  459447 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:38:15.871261  459447 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:38:15.901421  459447 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:38:15.901663  459447 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-378944 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:38:15.914138  459447 kubeadm.go:310] [bootstrap-token] Using token: f20mgr.mp8yeahngp4xg46o
	I0717 19:38:12.678188  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:15.176507  459061 pod_ready.go:102] pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace has status "Ready":"False"
	I0717 19:38:15.916156  459447 out.go:204]   - Configuring RBAC rules ...
	I0717 19:38:15.916304  459447 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:38:15.926114  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:38:15.936748  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:38:15.940344  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:38:15.943530  459447 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:38:15.947036  459447 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:38:16.249457  459447 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:38:16.706293  459447 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 19:38:17.247816  459447 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 19:38:17.249321  459447 kubeadm.go:310] 
	I0717 19:38:17.249431  459447 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 19:38:17.249453  459447 kubeadm.go:310] 
	I0717 19:38:17.249552  459447 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 19:38:17.249563  459447 kubeadm.go:310] 
	I0717 19:38:17.249594  459447 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 19:38:17.249677  459447 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:38:17.249768  459447 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:38:17.249791  459447 kubeadm.go:310] 
	I0717 19:38:17.249868  459447 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 19:38:17.249878  459447 kubeadm.go:310] 
	I0717 19:38:17.249949  459447 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:38:17.249968  459447 kubeadm.go:310] 
	I0717 19:38:17.250016  459447 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 19:38:17.250083  459447 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:38:17.250143  459447 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:38:17.250149  459447 kubeadm.go:310] 
	I0717 19:38:17.250269  459447 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:38:17.250371  459447 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 19:38:17.250381  459447 kubeadm.go:310] 
	I0717 19:38:17.250484  459447 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token f20mgr.mp8yeahngp4xg46o \
	I0717 19:38:17.250605  459447 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 19:38:17.250663  459447 kubeadm.go:310] 	--control-plane 
	I0717 19:38:17.250677  459447 kubeadm.go:310] 
	I0717 19:38:17.250771  459447 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:38:17.250784  459447 kubeadm.go:310] 
	I0717 19:38:17.250870  459447 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token f20mgr.mp8yeahngp4xg46o \
	I0717 19:38:17.251029  459447 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 19:38:17.252262  459447 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:38:17.252302  459447 cni.go:84] Creating CNI manager for ""
	I0717 19:38:17.252318  459447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:38:17.254910  459447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:38:17.669679  459061 pod_ready.go:81] duration metric: took 4m0.000889569s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" ...
	E0717 19:38:17.669706  459061 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-mtnc6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 19:38:17.669726  459061 pod_ready.go:38] duration metric: took 4m8.910120635s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:17.669768  459061 kubeadm.go:597] duration metric: took 4m18.632716414s to restartPrimaryControlPlane
	W0717 19:38:17.669838  459061 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 19:38:17.669870  459061 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:38:17.256192  459447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:38:17.268586  459447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 19:38:17.292455  459447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:38:17.292536  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:17.292623  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-378944 minikube.k8s.io/updated_at=2024_07_17T19_38_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=default-k8s-diff-port-378944 minikube.k8s.io/primary=true
	I0717 19:38:17.325184  459447 ops.go:34] apiserver oom_adj: -16
	I0717 19:38:17.469427  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:17.969845  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:18.470139  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:18.969524  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:19.469856  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:19.970486  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:20.470263  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:20.970157  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:21.470331  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:21.969885  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:22.469572  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:22.969898  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:23.470149  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:23.970327  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:24.470275  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:24.970386  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:25.469631  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:25.969749  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:26.469512  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:26.970082  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:27.469534  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:27.970318  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:28.470232  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:28.970033  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:29.469586  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:29.969588  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:30.469599  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:30.970505  459447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:31.119385  459447 kubeadm.go:1113] duration metric: took 13.826924083s to wait for elevateKubeSystemPrivileges
	I0717 19:38:31.119428  459447 kubeadm.go:394] duration metric: took 5m11.355625204s to StartCluster
	I0717 19:38:31.119449  459447 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:38:31.119548  459447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:38:31.121296  459447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:38:31.121610  459447 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.238 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:38:31.121724  459447 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:38:31.121802  459447 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-378944"
	I0717 19:38:31.121827  459447 config.go:182] Loaded profile config "default-k8s-diff-port-378944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:38:31.121846  459447 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-378944"
	I0717 19:38:31.121849  459447 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-378944"
	I0717 19:38:31.121873  459447 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-378944"
	W0717 19:38:31.121883  459447 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:38:31.121899  459447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-378944"
	I0717 19:38:31.121906  459447 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-378944"
	W0717 19:38:31.121915  459447 addons.go:243] addon metrics-server should already be in state true
	I0717 19:38:31.121927  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.121969  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.122322  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122339  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122366  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.122379  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.122388  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.122411  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.123339  459447 out.go:177] * Verifying Kubernetes components...
	I0717 19:38:31.129194  459447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:38:31.139023  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0717 19:38:31.139292  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0717 19:38:31.139632  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.139775  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.140272  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.140292  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.140684  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.140710  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.140731  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.141234  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.141257  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.141425  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0717 19:38:31.141431  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.141919  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.142149  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.142181  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.142410  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.142435  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.142824  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.143055  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.147020  459447 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-378944"
	W0717 19:38:31.147043  459447 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:38:31.147076  459447 host.go:66] Checking if "default-k8s-diff-port-378944" exists ...
	I0717 19:38:31.147428  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.147462  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.158908  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
	I0717 19:38:31.159534  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.160413  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.160438  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.161313  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.161588  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.161794  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0717 19:38:31.162315  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.162935  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.162963  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.163360  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.163618  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.164401  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.165089  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40011
	I0717 19:38:31.165402  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.165493  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.166082  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.166108  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.166133  459447 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:38:31.166520  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.166951  459447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:38:31.166995  459447 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:38:31.167294  459447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:38:31.167678  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:38:31.167700  459447 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:38:31.167725  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.168668  459447 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:38:31.168686  459447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:38:31.168704  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.171358  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.171986  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.172013  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172236  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172379  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.172558  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.172646  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.172749  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.172778  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.172902  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.173186  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.173396  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.173570  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.173711  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.184779  459447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0717 19:38:31.185400  459447 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:38:31.186325  459447 main.go:141] libmachine: Using API Version  1
	I0717 19:38:31.186350  459447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:38:31.186736  459447 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:38:31.186981  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetState
	I0717 19:38:31.188627  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .DriverName
	I0717 19:38:31.188841  459447 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:38:31.188860  459447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:38:31.188881  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHHostname
	I0717 19:38:31.191674  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.192104  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:42:f3", ip: ""} in network mk-default-k8s-diff-port-378944: {Iface:virbr2 ExpiryTime:2024-07-17 20:33:04 +0000 UTC Type:0 Mac:52:54:00:45:42:f3 Iaid: IPaddr:192.168.50.238 Prefix:24 Hostname:default-k8s-diff-port-378944 Clientid:01:52:54:00:45:42:f3}
	I0717 19:38:31.192129  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | domain default-k8s-diff-port-378944 has defined IP address 192.168.50.238 and MAC address 52:54:00:45:42:f3 in network mk-default-k8s-diff-port-378944
	I0717 19:38:31.192375  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHPort
	I0717 19:38:31.192868  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHKeyPath
	I0717 19:38:31.193084  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .GetSSHUsername
	I0717 19:38:31.193250  459447 sshutil.go:53] new ssh client: &{IP:192.168.50.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/default-k8s-diff-port-378944/id_rsa Username:docker}
	I0717 19:38:31.351524  459447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:38:31.365996  459447 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-378944" to be "Ready" ...
	I0717 19:38:31.376135  459447 node_ready.go:49] node "default-k8s-diff-port-378944" has status "Ready":"True"
	I0717 19:38:31.376168  459447 node_ready.go:38] duration metric: took 10.135533ms for node "default-k8s-diff-port-378944" to be "Ready" ...
	I0717 19:38:31.376182  459447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:31.385746  459447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:31.471924  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:38:31.488412  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:38:31.488440  459447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:38:31.489634  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:38:31.578028  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:38:31.578059  459447 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:38:31.653567  459447 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:38:31.653598  459447 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:38:31.692100  459447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:38:32.700716  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.228741753s)
	I0717 19:38:32.700795  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.700796  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.211127639s)
	I0717 19:38:32.700851  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.700869  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.700808  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703149  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703149  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703155  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703183  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703193  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.703202  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703163  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703235  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703254  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.703267  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.703505  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703517  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.703529  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.703554  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.703520  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.778305  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:32.778331  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:32.778693  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:32.778779  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:32.778733  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:32.942079  459447 pod_ready.go:92] pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:32.942114  459447 pod_ready.go:81] duration metric: took 1.556334407s for pod "coredns-7db6d8ff4d-jnwgp" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:32.942128  459447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.018197  459447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.326052616s)
	I0717 19:38:33.018262  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:33.018277  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:33.018625  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:33.018649  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:33.018659  459447 main.go:141] libmachine: Making call to close driver server
	I0717 19:38:33.018669  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) Calling .Close
	I0717 19:38:33.018696  459447 main.go:141] libmachine: (default-k8s-diff-port-378944) DBG | Closing plugin on server side
	I0717 19:38:33.018956  459447 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:38:33.018975  459447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:38:33.018996  459447 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-378944"
	I0717 19:38:33.021803  459447 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:38:33.023032  459447 addons.go:510] duration metric: took 1.901306809s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:38:33.949013  459447 pod_ready.go:92] pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.949038  459447 pod_ready.go:81] duration metric: took 1.006901797s for pod "coredns-7db6d8ff4d-xbtct" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.949050  459447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.953373  459447 pod_ready.go:92] pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.953393  459447 pod_ready.go:81] duration metric: took 4.33631ms for pod "etcd-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.953404  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.957845  459447 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.957869  459447 pod_ready.go:81] duration metric: took 4.456882ms for pod "kube-apiserver-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.957881  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.962465  459447 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:33.962488  459447 pod_ready.go:81] duration metric: took 4.598385ms for pod "kube-controller-manager-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:33.962500  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vhjq4" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.170244  459447 pod_ready.go:92] pod "kube-proxy-vhjq4" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:34.170274  459447 pod_ready.go:81] duration metric: took 207.766629ms for pod "kube-proxy-vhjq4" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.170284  459447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.570267  459447 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace has status "Ready":"True"
	I0717 19:38:34.570299  459447 pod_ready.go:81] duration metric: took 400.008056ms for pod "kube-scheduler-default-k8s-diff-port-378944" in "kube-system" namespace to be "Ready" ...
	I0717 19:38:34.570324  459447 pod_ready.go:38] duration metric: took 3.194102991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:38:34.570356  459447 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:38:34.570415  459447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:38:34.590893  459447 api_server.go:72] duration metric: took 3.469242847s to wait for apiserver process to appear ...
	I0717 19:38:34.590918  459447 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:38:34.590939  459447 api_server.go:253] Checking apiserver healthz at https://192.168.50.238:8444/healthz ...
	I0717 19:38:34.596086  459447 api_server.go:279] https://192.168.50.238:8444/healthz returned 200:
	ok
	I0717 19:38:34.597189  459447 api_server.go:141] control plane version: v1.30.2
	I0717 19:38:34.597213  459447 api_server.go:131] duration metric: took 6.288225ms to wait for apiserver health ...
	I0717 19:38:34.597221  459447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:38:34.774523  459447 system_pods.go:59] 9 kube-system pods found
	I0717 19:38:34.774563  459447 system_pods.go:61] "coredns-7db6d8ff4d-jnwgp" [f86efa81-cbe0-44a7-888f-639af3dc58ad] Running
	I0717 19:38:34.774571  459447 system_pods.go:61] "coredns-7db6d8ff4d-xbtct" [c24ce9ab-babb-4589-8046-e8e2d4ca68af] Running
	I0717 19:38:34.774577  459447 system_pods.go:61] "etcd-default-k8s-diff-port-378944" [b15d7ac0-b014-4fed-8e03-3b2eb8b23911] Running
	I0717 19:38:34.774582  459447 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-378944" [78cd796b-d751-44dd-91e7-85b48c77d87c] Running
	I0717 19:38:34.774590  459447 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-378944" [4981a20d-ce96-4c27-9b14-17e4a8a18a7c] Running
	I0717 19:38:34.774595  459447 system_pods.go:61] "kube-proxy-vhjq4" [092af79d-ebc0-4e16-97ef-725195e95344] Running
	I0717 19:38:34.774598  459447 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-378944" [60a0717a-ad29-4360-a514-afc1081f115c] Running
	I0717 19:38:34.774607  459447 system_pods.go:61] "metrics-server-569cc877fc-hvknj" [d214e760-d49e-4554-85c2-77e5da1b150f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:38:34.774613  459447 system_pods.go:61] "storage-provisioner" [153a102e-f07b-46b4-a9d0-9e754237ca6e] Running
	I0717 19:38:34.774624  459447 system_pods.go:74] duration metric: took 177.395337ms to wait for pod list to return data ...
	I0717 19:38:34.774636  459447 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:38:34.970004  459447 default_sa.go:45] found service account: "default"
	I0717 19:38:34.970040  459447 default_sa.go:55] duration metric: took 195.394993ms for default service account to be created ...
	I0717 19:38:34.970054  459447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:38:35.173288  459447 system_pods.go:86] 9 kube-system pods found
	I0717 19:38:35.173327  459447 system_pods.go:89] "coredns-7db6d8ff4d-jnwgp" [f86efa81-cbe0-44a7-888f-639af3dc58ad] Running
	I0717 19:38:35.173336  459447 system_pods.go:89] "coredns-7db6d8ff4d-xbtct" [c24ce9ab-babb-4589-8046-e8e2d4ca68af] Running
	I0717 19:38:35.173343  459447 system_pods.go:89] "etcd-default-k8s-diff-port-378944" [b15d7ac0-b014-4fed-8e03-3b2eb8b23911] Running
	I0717 19:38:35.173352  459447 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-378944" [78cd796b-d751-44dd-91e7-85b48c77d87c] Running
	I0717 19:38:35.173359  459447 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-378944" [4981a20d-ce96-4c27-9b14-17e4a8a18a7c] Running
	I0717 19:38:35.173365  459447 system_pods.go:89] "kube-proxy-vhjq4" [092af79d-ebc0-4e16-97ef-725195e95344] Running
	I0717 19:38:35.173370  459447 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-378944" [60a0717a-ad29-4360-a514-afc1081f115c] Running
	I0717 19:38:35.173377  459447 system_pods.go:89] "metrics-server-569cc877fc-hvknj" [d214e760-d49e-4554-85c2-77e5da1b150f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:38:35.173384  459447 system_pods.go:89] "storage-provisioner" [153a102e-f07b-46b4-a9d0-9e754237ca6e] Running
	I0717 19:38:35.173397  459447 system_pods.go:126] duration metric: took 203.335308ms to wait for k8s-apps to be running ...
	I0717 19:38:35.173406  459447 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:38:35.173471  459447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:35.188943  459447 system_svc.go:56] duration metric: took 15.522808ms WaitForService to wait for kubelet
	I0717 19:38:35.188980  459447 kubeadm.go:582] duration metric: took 4.067341756s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:38:35.189006  459447 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:38:35.369694  459447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:38:35.369723  459447 node_conditions.go:123] node cpu capacity is 2
	I0717 19:38:35.369748  459447 node_conditions.go:105] duration metric: took 180.736346ms to run NodePressure ...
	I0717 19:38:35.369764  459447 start.go:241] waiting for startup goroutines ...
	I0717 19:38:35.369773  459447 start.go:246] waiting for cluster config update ...
	I0717 19:38:35.369787  459447 start.go:255] writing updated cluster config ...
	I0717 19:38:35.370064  459447 ssh_runner.go:195] Run: rm -f paused
	I0717 19:38:35.422285  459447 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 19:38:35.424315  459447 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-378944" cluster and "default" namespace by default
	I0717 19:38:49.633874  459061 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.96396735s)
	I0717 19:38:49.633958  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:38:49.653668  459061 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:38:49.665421  459061 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:38:49.677405  459061 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:38:49.677433  459061 kubeadm.go:157] found existing configuration files:
	
	I0717 19:38:49.677485  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:38:49.688418  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:38:49.688515  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:38:49.699121  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:38:49.709505  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:38:49.709622  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:38:49.720533  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:38:49.731191  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:38:49.731259  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:38:49.741071  459061 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:38:49.750483  459061 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:38:49.750540  459061 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:38:49.759991  459061 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:38:49.814169  459061 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 19:38:49.814235  459061 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:38:49.977655  459061 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:38:49.977811  459061 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:38:49.977922  459061 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:38:50.204096  459061 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:38:50.206849  459061 out.go:204]   - Generating certificates and keys ...
	I0717 19:38:50.206956  459061 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:38:50.207032  459061 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:38:50.207102  459061 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:38:50.207227  459061 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:38:50.207341  459061 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:38:50.207388  459061 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:38:50.207448  459061 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:38:50.207511  459061 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:38:50.207618  459061 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:38:50.207732  459061 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:38:50.207787  459061 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:38:50.207868  459061 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:38:50.298049  459061 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:38:50.456369  459061 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:38:50.649923  459061 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:38:50.771710  459061 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:38:50.939506  459061 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:38:50.939999  459061 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:38:50.942645  459061 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:38:50.944456  459061 out.go:204]   - Booting up control plane ...
	I0717 19:38:50.944563  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:38:50.944648  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:38:50.944906  459061 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:38:50.963779  459061 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:38:50.964946  459061 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:38:50.964999  459061 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:38:51.112106  459061 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:38:51.112222  459061 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:38:51.613966  459061 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.041018ms
	I0717 19:38:51.614079  459061 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 19:38:56.617120  459061 kubeadm.go:310] [api-check] The API server is healthy after 5.003106336s
	I0717 19:38:56.635312  459061 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:38:56.653249  459061 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:38:56.688277  459061 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:38:56.688570  459061 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-637675 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:38:56.703781  459061 kubeadm.go:310] [bootstrap-token] Using token: 5c1d8d.hedm6ka56xpdzroz
	I0717 19:38:56.705437  459061 out.go:204]   - Configuring RBAC rules ...
	I0717 19:38:56.705575  459061 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:38:56.712968  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:38:56.723899  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:38:56.731634  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:38:56.737169  459061 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:38:56.745083  459061 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:38:57.024680  459061 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:38:57.477396  459061 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 19:38:58.025476  459061 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 19:38:58.026512  459061 kubeadm.go:310] 
	I0717 19:38:58.026631  459061 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 19:38:58.026655  459061 kubeadm.go:310] 
	I0717 19:38:58.026772  459061 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 19:38:58.026790  459061 kubeadm.go:310] 
	I0717 19:38:58.026828  459061 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 19:38:58.026905  459061 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:38:58.026971  459061 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:38:58.026979  459061 kubeadm.go:310] 
	I0717 19:38:58.027070  459061 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 19:38:58.027094  459061 kubeadm.go:310] 
	I0717 19:38:58.027163  459061 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:38:58.027171  459061 kubeadm.go:310] 
	I0717 19:38:58.027242  459061 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 19:38:58.027341  459061 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:38:58.027431  459061 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:38:58.027442  459061 kubeadm.go:310] 
	I0717 19:38:58.027547  459061 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:38:58.027663  459061 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 19:38:58.027673  459061 kubeadm.go:310] 
	I0717 19:38:58.027788  459061 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5c1d8d.hedm6ka56xpdzroz \
	I0717 19:38:58.027949  459061 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a \
	I0717 19:38:58.027998  459061 kubeadm.go:310] 	--control-plane 
	I0717 19:38:58.028012  459061 kubeadm.go:310] 
	I0717 19:38:58.028123  459061 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:38:58.028133  459061 kubeadm.go:310] 
	I0717 19:38:58.028235  459061 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5c1d8d.hedm6ka56xpdzroz \
	I0717 19:38:58.028355  459061 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa0140f2aad76821856736ad1e771a53a4f95efe0123fb861395a05b2b1f6a1a 
	I0717 19:38:58.028891  459061 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
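	Editor's note: the [WARNING Service-Kubelet] line above is kubeadm's own hint. If one wanted to act on it directly on the node (outside of minikube's own lifecycle handling of the kubelet), the commands it refers to would look like this (illustrative):

	    sudo systemctl enable --now kubelet.service
	    systemctl is-enabled kubelet.service    # should print "enabled" once the unit is persisted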
	I0717 19:38:58.029012  459061 cni.go:84] Creating CNI manager for ""
	I0717 19:38:58.029029  459061 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:38:58.031915  459061 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:38:58.033543  459061 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:38:58.044441  459061 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
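	Editor's note: the log records only the size (496 bytes) of /etc/cni/net.d/1-k8s.conflist, not its contents. A minimal bridge conflist of roughly that shape, assuming the standard CNI "bridge" and "host-local" IPAM plugins plus "portmap", is sketched below; every field value here is an illustrative assumption, not the exact file minikube wrote in this run.

	    # Illustrative only -- not the actual file from this test run.
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF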
	I0717 19:38:58.062984  459061 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:38:58.063092  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:58.063115  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-637675 minikube.k8s.io/updated_at=2024_07_17T19_38_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ea5c2d8818055de88db951b296600d4e926998e6 minikube.k8s.io/name=embed-certs-637675 minikube.k8s.io/primary=true
	I0717 19:38:58.088566  459061 ops.go:34] apiserver oom_adj: -16
	I0717 19:38:58.243142  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:58.743578  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:59.244162  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:38:59.743393  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:00.244096  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:00.743309  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:01.244049  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:01.743222  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:02.243771  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:02.743459  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:03.243303  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:03.743299  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:04.243263  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:04.743572  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:05.243876  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:05.743567  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:06.244040  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:06.743302  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:07.244174  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:07.744243  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:08.244108  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:08.744208  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:09.243712  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:09.743417  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:10.243321  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:10.743234  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:11.244006  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:11.744244  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:12.243673  459061 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:39:12.373286  459061 kubeadm.go:1113] duration metric: took 14.310267908s to wait for elevateKubeSystemPrivileges
	I0717 19:39:12.373331  459061 kubeadm.go:394] duration metric: took 5m13.390297719s to StartCluster
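	Editor's note: the repeated "kubectl get sa default" runs above are minikube polling until the API server can serve the default service account (the elevateKubeSystemPrivileges wait). A minimal bash sketch of that poll-until-ready pattern, with an assumed 500ms interval (the real loop lives in minikube's Go code and its backoff may differ), is:

	    until sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # assumed retry interval; minikube's actual timing may differ
	    done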
	I0717 19:39:12.373357  459061 settings.go:142] acquiring lock: {Name:mk0123487e2d9cc68ee99d6e5e942cd09e194f17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:39:12.373461  459061 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:39:12.375404  459061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/kubeconfig: {Name:mk8aae04c80bfd500c87848513384d9459be2ceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:39:12.375739  459061 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:39:12.375786  459061 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:39:12.375875  459061 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-637675"
	I0717 19:39:12.375919  459061 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-637675"
	W0717 19:39:12.375933  459061 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:39:12.375967  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.375981  459061 config.go:182] Loaded profile config "embed-certs-637675": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:39:12.376031  459061 addons.go:69] Setting default-storageclass=true in profile "embed-certs-637675"
	I0717 19:39:12.376062  459061 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-637675"
	I0717 19:39:12.376333  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.376359  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.376426  459061 addons.go:69] Setting metrics-server=true in profile "embed-certs-637675"
	I0717 19:39:12.376494  459061 addons.go:234] Setting addon metrics-server=true in "embed-certs-637675"
	W0717 19:39:12.376526  459061 addons.go:243] addon metrics-server should already be in state true
	I0717 19:39:12.376596  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.376427  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.376672  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.376981  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.377140  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.378935  459061 out.go:177] * Verifying Kubernetes components...
	I0717 19:39:12.380094  459061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:39:12.396180  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37281
	I0717 19:39:12.396769  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.397333  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.397359  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.397449  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44781
	I0717 19:39:12.397580  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40945
	I0717 19:39:12.397773  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.397893  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.398045  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.398343  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.398355  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.398387  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.398430  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.398488  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.398499  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.398660  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.398798  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.399295  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.399322  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.399545  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.403398  459061 addons.go:234] Setting addon default-storageclass=true in "embed-certs-637675"
	W0717 19:39:12.403420  459061 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:39:12.403451  459061 host.go:66] Checking if "embed-certs-637675" exists ...
	I0717 19:39:12.403872  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.403898  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.415595  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0717 19:39:12.416232  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.417013  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.417033  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.417587  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.418029  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.419082  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0717 19:39:12.420074  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.420699  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.420856  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.420875  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.421414  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.421614  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.423149  459061 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:39:12.423248  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
	I0717 19:39:12.423428  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.423575  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.424023  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.424076  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.424418  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.424571  459061 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:39:12.424588  459061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:39:12.424608  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.424944  459061 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19282-392903/.minikube/bin/docker-machine-driver-kvm2
	I0717 19:39:12.424980  459061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:39:12.425348  459061 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:39:12.426757  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:39:12.426781  459061 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:39:12.426853  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.427990  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.428571  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.428594  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.429076  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.429456  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.429803  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.430161  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.430952  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.432978  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.433047  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.433185  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.433366  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.433623  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.433978  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.441066  459061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0717 19:39:12.441557  459061 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:39:12.442011  459061 main.go:141] libmachine: Using API Version  1
	I0717 19:39:12.442029  459061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:39:12.442447  459061 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:39:12.442677  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetState
	I0717 19:39:12.444789  459061 main.go:141] libmachine: (embed-certs-637675) Calling .DriverName
	I0717 19:39:12.444999  459061 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:39:12.445015  459061 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:39:12.445036  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHHostname
	I0717 19:39:12.447829  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.448361  459061 main.go:141] libmachine: (embed-certs-637675) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:d5:fa", ip: ""} in network mk-embed-certs-637675: {Iface:virbr1 ExpiryTime:2024-07-17 20:33:43 +0000 UTC Type:0 Mac:52:54:00:33:d5:fa Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:embed-certs-637675 Clientid:01:52:54:00:33:d5:fa}
	I0717 19:39:12.448390  459061 main.go:141] libmachine: (embed-certs-637675) DBG | domain embed-certs-637675 has defined IP address 192.168.39.140 and MAC address 52:54:00:33:d5:fa in network mk-embed-certs-637675
	I0717 19:39:12.448577  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHPort
	I0717 19:39:12.448770  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHKeyPath
	I0717 19:39:12.448936  459061 main.go:141] libmachine: (embed-certs-637675) Calling .GetSSHUsername
	I0717 19:39:12.449070  459061 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/embed-certs-637675/id_rsa Username:docker}
	I0717 19:39:12.728350  459061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:39:12.774599  459061 node_ready.go:35] waiting up to 6m0s for node "embed-certs-637675" to be "Ready" ...
	I0717 19:39:12.787047  459061 node_ready.go:49] node "embed-certs-637675" has status "Ready":"True"
	I0717 19:39:12.787080  459061 node_ready.go:38] duration metric: took 12.442277ms for node "embed-certs-637675" to be "Ready" ...
	I0717 19:39:12.787092  459061 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:39:12.794421  459061 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:12.884786  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:39:12.916243  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:39:12.956508  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:39:12.956539  459061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:39:13.012727  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:39:13.012757  459061 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:39:13.090259  459061 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:39:13.090288  459061 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:39:13.189147  459061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
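	Editor's note: after the metrics-server manifests are applied, a hands-on way to check whether the addon actually becomes Ready (it remains Pending in the pod listings later in this log) would be, for example, the commands below; the deployment name, APIService name, and pod label assume the standard metrics-server manifests that the minikube addon ships.

	    kubectl -n kube-system rollout status deployment/metrics-server --timeout=120s
	    kubectl get apiservice v1beta1.metrics.k8s.io
	    kubectl -n kube-system describe pod -l k8s-app=metrics-server | tail -n 20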
	I0717 19:39:13.743500  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.743529  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.743886  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.743943  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.743967  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.743984  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.743993  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.744243  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.744292  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.744318  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.745277  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.745344  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.745605  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.745624  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.745632  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.745642  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.745646  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.745835  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.745861  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.745876  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.760884  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:13.760909  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:13.761330  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:13.761352  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:13.761392  459061 main.go:141] libmachine: (embed-certs-637675) DBG | Closing plugin on server side
	I0717 19:39:13.809721  459061 pod_ready.go:92] pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:13.809743  459061 pod_ready.go:81] duration metric: took 1.015289517s for pod "coredns-7db6d8ff4d-45xn7" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:13.809753  459061 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.027460  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:14.027489  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:14.027856  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:14.027878  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:14.027889  459061 main.go:141] libmachine: Making call to close driver server
	I0717 19:39:14.027898  459061 main.go:141] libmachine: (embed-certs-637675) Calling .Close
	I0717 19:39:14.028130  459061 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:39:14.028146  459061 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:39:14.028177  459061 addons.go:475] Verifying addon metrics-server=true in "embed-certs-637675"
	I0717 19:39:14.030113  459061 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 19:39:14.031442  459061 addons.go:510] duration metric: took 1.65566168s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 19:39:14.816503  459061 pod_ready.go:92] pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.816527  459061 pod_ready.go:81] duration metric: took 1.006767634s for pod "coredns-7db6d8ff4d-nw8g8" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.816536  459061 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.820749  459061 pod_ready.go:92] pod "etcd-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.820768  459061 pod_ready.go:81] duration metric: took 4.225695ms for pod "etcd-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.820775  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.824793  459061 pod_ready.go:92] pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.824812  459061 pod_ready.go:81] duration metric: took 4.02987ms for pod "kube-apiserver-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.824823  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.828718  459061 pod_ready.go:92] pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:14.828738  459061 pod_ready.go:81] duration metric: took 3.907636ms for pod "kube-controller-manager-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:14.828748  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dns5j" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.178249  459061 pod_ready.go:92] pod "kube-proxy-dns5j" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:15.178276  459061 pod_ready.go:81] duration metric: took 349.519823ms for pod "kube-proxy-dns5j" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.178289  459061 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.578418  459061 pod_ready.go:92] pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace has status "Ready":"True"
	I0717 19:39:15.578445  459061 pod_ready.go:81] duration metric: took 400.149092ms for pod "kube-scheduler-embed-certs-637675" in "kube-system" namespace to be "Ready" ...
	I0717 19:39:15.578454  459061 pod_ready.go:38] duration metric: took 2.791350468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:39:15.578471  459061 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:39:15.578526  459061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:39:15.597456  459061 api_server.go:72] duration metric: took 3.221674147s to wait for apiserver process to appear ...
	I0717 19:39:15.597483  459061 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:39:15.597503  459061 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0717 19:39:15.602054  459061 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0717 19:39:15.603214  459061 api_server.go:141] control plane version: v1.30.2
	I0717 19:39:15.603238  459061 api_server.go:131] duration metric: took 5.7478ms to wait for apiserver health ...
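	Editor's note: the healthz probe logged above can be reproduced by hand against the same endpoint; assuming anonymous access to /healthz is allowed (the Kubernetes default via the system:public-info-viewer binding), for example:

	    curl -sk https://192.168.39.140:8443/healthz            # expect: ok
	    curl -sk "https://192.168.39.140:8443/readyz?verbose" | tail -n 5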
	I0717 19:39:15.603248  459061 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:39:15.783262  459061 system_pods.go:59] 9 kube-system pods found
	I0717 19:39:15.783295  459061 system_pods.go:61] "coredns-7db6d8ff4d-45xn7" [9c936942-55bb-44c9-b446-365ec316c390] Running
	I0717 19:39:15.783300  459061 system_pods.go:61] "coredns-7db6d8ff4d-nw8g8" [0313a484-73be-49e2-a483-b15f47abc24a] Running
	I0717 19:39:15.783303  459061 system_pods.go:61] "etcd-embed-certs-637675" [d83ac63c-5eb5-40f0-bf58-37c048642b72] Running
	I0717 19:39:15.783307  459061 system_pods.go:61] "kube-apiserver-embed-certs-637675" [0b60ef89-e78c-4e24-b391-a5d4930d0f5f] Running
	I0717 19:39:15.783310  459061 system_pods.go:61] "kube-controller-manager-embed-certs-637675" [b2da7425-19f4-4435-8a30-17744a3289b0] Running
	I0717 19:39:15.783312  459061 system_pods.go:61] "kube-proxy-dns5j" [4d248751-6ee4-460d-b608-be6586613e3d] Running
	I0717 19:39:15.783315  459061 system_pods.go:61] "kube-scheduler-embed-certs-637675" [43f463da-858a-4261-b7a1-01e504e157f6] Running
	I0717 19:39:15.783321  459061 system_pods.go:61] "metrics-server-569cc877fc-jf42d" [c92dbb96-5721-4ff9-a428-9215223d2b83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:39:15.783325  459061 system_pods.go:61] "storage-provisioner" [11a18e44-b523-46b2-a890-dd693460e032] Running
	I0717 19:39:15.783331  459061 system_pods.go:74] duration metric: took 180.078172ms to wait for pod list to return data ...
	I0717 19:39:15.783339  459061 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:39:15.978711  459061 default_sa.go:45] found service account: "default"
	I0717 19:39:15.978747  459061 default_sa.go:55] duration metric: took 195.400502ms for default service account to be created ...
	I0717 19:39:15.978762  459061 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:39:16.181968  459061 system_pods.go:86] 9 kube-system pods found
	I0717 19:39:16.181997  459061 system_pods.go:89] "coredns-7db6d8ff4d-45xn7" [9c936942-55bb-44c9-b446-365ec316c390] Running
	I0717 19:39:16.182003  459061 system_pods.go:89] "coredns-7db6d8ff4d-nw8g8" [0313a484-73be-49e2-a483-b15f47abc24a] Running
	I0717 19:39:16.182007  459061 system_pods.go:89] "etcd-embed-certs-637675" [d83ac63c-5eb5-40f0-bf58-37c048642b72] Running
	I0717 19:39:16.182011  459061 system_pods.go:89] "kube-apiserver-embed-certs-637675" [0b60ef89-e78c-4e24-b391-a5d4930d0f5f] Running
	I0717 19:39:16.182016  459061 system_pods.go:89] "kube-controller-manager-embed-certs-637675" [b2da7425-19f4-4435-8a30-17744a3289b0] Running
	I0717 19:39:16.182021  459061 system_pods.go:89] "kube-proxy-dns5j" [4d248751-6ee4-460d-b608-be6586613e3d] Running
	I0717 19:39:16.182025  459061 system_pods.go:89] "kube-scheduler-embed-certs-637675" [43f463da-858a-4261-b7a1-01e504e157f6] Running
	I0717 19:39:16.182033  459061 system_pods.go:89] "metrics-server-569cc877fc-jf42d" [c92dbb96-5721-4ff9-a428-9215223d2b83] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:39:16.182042  459061 system_pods.go:89] "storage-provisioner" [11a18e44-b523-46b2-a890-dd693460e032] Running
	I0717 19:39:16.182049  459061 system_pods.go:126] duration metric: took 203.281636ms to wait for k8s-apps to be running ...
	I0717 19:39:16.182057  459061 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:39:16.182101  459061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:39:16.198464  459061 system_svc.go:56] duration metric: took 16.391405ms WaitForService to wait for kubelet
	I0717 19:39:16.198504  459061 kubeadm.go:582] duration metric: took 3.822728067s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:39:16.198531  459061 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:39:16.378407  459061 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:39:16.378440  459061 node_conditions.go:123] node cpu capacity is 2
	I0717 19:39:16.378451  459061 node_conditions.go:105] duration metric: took 179.91335ms to run NodePressure ...
	I0717 19:39:16.378465  459061 start.go:241] waiting for startup goroutines ...
	I0717 19:39:16.378476  459061 start.go:246] waiting for cluster config update ...
	I0717 19:39:16.378489  459061 start.go:255] writing updated cluster config ...
	I0717 19:39:16.378845  459061 ssh_runner.go:195] Run: rm -f paused
	I0717 19:39:16.431808  459061 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 19:39:16.433648  459061 out.go:177] * Done! kubectl is now configured to use "embed-certs-637675" cluster and "default" namespace by default
	I0717 19:39:46.819105  459741 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:39:46.819209  459741 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 19:39:46.820837  459741 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:39:46.820940  459741 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:39:46.821010  459741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:39:46.821148  459741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:39:46.821282  459741 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:39:46.821377  459741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:39:46.823092  459741 out.go:204]   - Generating certificates and keys ...
	I0717 19:39:46.823190  459741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:39:46.823280  459741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:39:46.823409  459741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:39:46.823509  459741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:39:46.823629  459741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:39:46.823715  459741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:39:46.823802  459741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:39:46.823885  459741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:39:46.823975  459741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:39:46.824067  459741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:39:46.824109  459741 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:39:46.824183  459741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:39:46.824248  459741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:39:46.824309  459741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:39:46.824409  459741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:39:46.824506  459741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:39:46.824642  459741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:39:46.824729  459741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:39:46.824775  459741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:39:46.824869  459741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:39:46.826222  459741 out.go:204]   - Booting up control plane ...
	I0717 19:39:46.826334  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:39:46.826483  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:39:46.826566  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:39:46.826677  459741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:39:46.826855  459741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:39:46.826954  459741 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:39:46.827061  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827286  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827365  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827537  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827618  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.827814  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.827916  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.828105  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.828210  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:39:46.828440  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:39:46.828449  459741 kubeadm.go:310] 
	I0717 19:39:46.828482  459741 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:39:46.828544  459741 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:39:46.828555  459741 kubeadm.go:310] 
	I0717 19:39:46.828601  459741 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:39:46.828648  459741 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:39:46.828787  459741 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:39:46.828795  459741 kubeadm.go:310] 
	I0717 19:39:46.828928  459741 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:39:46.828975  459741 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:39:46.829023  459741 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:39:46.829033  459741 kubeadm.go:310] 
	I0717 19:39:46.829156  459741 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:39:46.829280  459741 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:39:46.829288  459741 kubeadm.go:310] 
	I0717 19:39:46.829430  459741 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:39:46.829538  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:39:46.829640  459741 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:39:46.829753  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:39:46.829812  459741 kubeadm.go:310] 
	W0717 19:39:46.829883  459741 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 19:39:46.829939  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 19:39:47.290949  459741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:39:47.307166  459741 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:39:47.318260  459741 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:39:47.318283  459741 kubeadm.go:157] found existing configuration files:
	
	I0717 19:39:47.318336  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:39:47.328087  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:39:47.328150  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:39:47.339029  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:39:47.348854  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:39:47.348913  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:39:47.358498  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:39:47.368592  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:39:47.368651  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:39:47.379802  459741 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:39:47.391069  459741 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:39:47.391139  459741 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:39:47.402410  459741 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:39:47.620822  459741 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:41:43.630999  459741 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 19:41:43.631161  459741 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 19:41:43.631238  459741 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 19:41:43.631322  459741 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:41:43.631452  459741 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:41:43.631595  459741 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:41:43.631767  459741 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:41:43.631852  459741 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:41:43.633956  459741 out.go:204]   - Generating certificates and keys ...
	I0717 19:41:43.634058  459741 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:41:43.634160  459741 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:41:43.634292  459741 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 19:41:43.634382  459741 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 19:41:43.634457  459741 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 19:41:43.634560  459741 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 19:41:43.634646  459741 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 19:41:43.634743  459741 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 19:41:43.634848  459741 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 19:41:43.634977  459741 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 19:41:43.635038  459741 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 19:41:43.635088  459741 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:41:43.635129  459741 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:41:43.635173  459741 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:41:43.635240  459741 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:41:43.635326  459741 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:41:43.635477  459741 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:41:43.635594  459741 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:41:43.635675  459741 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:41:43.635758  459741 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:41:43.637529  459741 out.go:204]   - Booting up control plane ...
	I0717 19:41:43.637719  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:41:43.637857  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:41:43.637948  459741 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:41:43.638086  459741 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:41:43.638278  459741 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:41:43.638336  459741 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 19:41:43.638427  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.638656  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.638732  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.638966  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639046  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639310  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639407  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639665  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639769  459741 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 19:41:43.639950  459741 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 19:41:43.639969  459741 kubeadm.go:310] 
	I0717 19:41:43.640006  459741 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 19:41:43.640047  459741 kubeadm.go:310] 		timed out waiting for the condition
	I0717 19:41:43.640056  459741 kubeadm.go:310] 
	I0717 19:41:43.640101  459741 kubeadm.go:310] 	This error is likely caused by:
	I0717 19:41:43.640148  459741 kubeadm.go:310] 		- The kubelet is not running
	I0717 19:41:43.640247  459741 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 19:41:43.640255  459741 kubeadm.go:310] 
	I0717 19:41:43.640365  459741 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 19:41:43.640398  459741 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 19:41:43.640426  459741 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 19:41:43.640434  459741 kubeadm.go:310] 
	I0717 19:41:43.640580  459741 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 19:41:43.640664  459741 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 19:41:43.640676  459741 kubeadm.go:310] 
	I0717 19:41:43.640772  459741 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 19:41:43.640849  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 19:41:43.640912  459741 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 19:41:43.640975  459741 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 19:41:43.640997  459741 kubeadm.go:310] 
	I0717 19:41:43.641050  459741 kubeadm.go:394] duration metric: took 8m2.947491611s to StartCluster
	I0717 19:41:43.641102  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 19:41:43.641159  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 19:41:43.691693  459741 cri.go:89] found id: ""
	I0717 19:41:43.691734  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.691746  459741 logs.go:278] No container was found matching "kube-apiserver"
	I0717 19:41:43.691755  459741 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 19:41:43.691822  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 19:41:43.730266  459741 cri.go:89] found id: ""
	I0717 19:41:43.730301  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.730311  459741 logs.go:278] No container was found matching "etcd"
	I0717 19:41:43.730319  459741 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 19:41:43.730401  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 19:41:43.766878  459741 cri.go:89] found id: ""
	I0717 19:41:43.766907  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.766916  459741 logs.go:278] No container was found matching "coredns"
	I0717 19:41:43.766922  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 19:41:43.767012  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 19:41:43.810002  459741 cri.go:89] found id: ""
	I0717 19:41:43.810040  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.810051  459741 logs.go:278] No container was found matching "kube-scheduler"
	I0717 19:41:43.810059  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 19:41:43.810133  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 19:41:43.846561  459741 cri.go:89] found id: ""
	I0717 19:41:43.846621  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.846637  459741 logs.go:278] No container was found matching "kube-proxy"
	I0717 19:41:43.846645  459741 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 19:41:43.846715  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 19:41:43.884047  459741 cri.go:89] found id: ""
	I0717 19:41:43.884080  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.884091  459741 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 19:41:43.884099  459741 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 19:41:43.884224  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 19:41:43.931636  459741 cri.go:89] found id: ""
	I0717 19:41:43.931677  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.931691  459741 logs.go:278] No container was found matching "kindnet"
	I0717 19:41:43.931699  459741 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 19:41:43.931768  459741 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 19:41:43.969202  459741 cri.go:89] found id: ""
	I0717 19:41:43.969240  459741 logs.go:276] 0 containers: []
	W0717 19:41:43.969260  459741 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 19:41:43.969275  459741 logs.go:123] Gathering logs for kubelet ...
	I0717 19:41:43.969296  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 19:41:44.026443  459741 logs.go:123] Gathering logs for dmesg ...
	I0717 19:41:44.026500  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 19:41:44.042750  459741 logs.go:123] Gathering logs for describe nodes ...
	I0717 19:41:44.042788  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 19:41:44.140053  459741 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 19:41:44.140079  459741 logs.go:123] Gathering logs for CRI-O ...
	I0717 19:41:44.140093  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 19:41:44.263660  459741 logs.go:123] Gathering logs for container status ...
	I0717 19:41:44.263704  459741 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 19:41:44.311783  459741 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 19:41:44.311838  459741 out.go:239] * 
	W0717 19:41:44.311948  459741 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:41:44.311982  459741 out.go:239] * 
	W0717 19:41:44.313153  459741 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:41:44.316845  459741 out.go:177] 
	W0717 19:41:44.318001  459741 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 19:41:44.318059  459741 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 19:41:44.318087  459741 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 19:41:44.319471  459741 out.go:177] 
	
	
	==> CRI-O <==
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.913472380Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245951913450031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=855da673-17f0-412f-be3b-8123f3acc2cd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.914224221Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6661054c-0cc1-4012-9569-6abb03e090b6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.914276499Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6661054c-0cc1-4012-9569-6abb03e090b6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.914306717Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6661054c-0cc1-4012-9569-6abb03e090b6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.951925942Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72ccd70c-cce1-41ea-b931-9a4208f0dd44 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.952097204Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72ccd70c-cce1-41ea-b931-9a4208f0dd44 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.953645945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f275bd1-ae39-4ce9-871b-ccb9fc38cd81 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.954067160Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245951954044936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f275bd1-ae39-4ce9-871b-ccb9fc38cd81 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.954737431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9f5f8b2-cbfc-42c6-a2ba-24f62fba38e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.954823933Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9f5f8b2-cbfc-42c6-a2ba-24f62fba38e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.954871695Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e9f5f8b2-cbfc-42c6-a2ba-24f62fba38e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.991895158Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc4327c9-131e-4431-b499-2ea609a7cdf8 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.992087995Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc4327c9-131e-4431-b499-2ea609a7cdf8 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.993227613Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=942bb830-7bf7-4f62-a43a-896e77471b1c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.993620828Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245951993598130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=942bb830-7bf7-4f62-a43a-896e77471b1c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.994102734Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5549fada-3ca7-4343-8369-7a001e9ab99f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.994195918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5549fada-3ca7-4343-8369-7a001e9ab99f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:52:31 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:31.994233334Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5549fada-3ca7-4343-8369-7a001e9ab99f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:52:32 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:32.026257329Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d2764e4-473e-4659-8417-70875d8ba51c name=/runtime.v1.RuntimeService/Version
	Jul 17 19:52:32 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:32.026355963Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d2764e4-473e-4659-8417-70875d8ba51c name=/runtime.v1.RuntimeService/Version
	Jul 17 19:52:32 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:32.027711584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=236388f5-2b88-4e12-823e-56f6b4057337 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:52:32 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:32.028201445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721245952028173640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=236388f5-2b88-4e12-823e-56f6b4057337 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:52:32 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:32.028726104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2684684d-732a-4e0c-a432-33d039c4ccea name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:52:32 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:32.028800068Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2684684d-732a-4e0c-a432-33d039c4ccea name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:52:32 old-k8s-version-998147 crio[650]: time="2024-07-17 19:52:32.028836517Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2684684d-732a-4e0c-a432-33d039c4ccea name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul17 19:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052125] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045822] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.749399] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.651884] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.750489] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.317708] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.064289] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056621] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.217924] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.129076] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.259232] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +6.636882] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.063978] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.692971] systemd-fstab-generator[970]: Ignoring "noauto" option for root device
	[ +13.037868] kauditd_printk_skb: 46 callbacks suppressed
	[Jul17 19:37] systemd-fstab-generator[5048]: Ignoring "noauto" option for root device
	[Jul17 19:39] systemd-fstab-generator[5324]: Ignoring "noauto" option for root device
	[  +0.060287] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:52:32 up 19 min,  0 users,  load average: 0.00, 0.02, 0.02
	Linux old-k8s-version-998147 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 17 19:52:27 old-k8s-version-998147 kubelet[6748]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc0008c8870)
	Jul 17 19:52:27 old-k8s-version-998147 kubelet[6748]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 17 19:52:27 old-k8s-version-998147 kubelet[6748]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 17 19:52:27 old-k8s-version-998147 kubelet[6748]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 17 19:52:27 old-k8s-version-998147 kubelet[6748]: goroutine 158 [select]:
	Jul 17 19:52:27 old-k8s-version-998147 kubelet[6748]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009a3ef0, 0x4f0ac20, 0xc000b0a050, 0x1, 0xc0001020c0)
	Jul 17 19:52:27 old-k8s-version-998147 kubelet[6748]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jul 17 19:52:27 old-k8s-version-998147 kubelet[6748]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d0700, 0xc0001020c0)
	Jul 17 19:52:27 old-k8s-version-998147 kubelet[6748]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 17 19:52:27 old-k8s-version-998147 kubelet[6748]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 17 19:52:27 old-k8s-version-998147 kubelet[6748]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 17 19:52:27 old-k8s-version-998147 kubelet[6748]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0006dea30, 0xc000888f80)
	Jul 17 19:52:27 old-k8s-version-998147 kubelet[6748]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 17 19:52:27 old-k8s-version-998147 kubelet[6748]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 17 19:52:27 old-k8s-version-998147 kubelet[6748]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 17 19:52:27 old-k8s-version-998147 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 19:52:27 old-k8s-version-998147 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 19:52:28 old-k8s-version-998147 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 132.
	Jul 17 19:52:28 old-k8s-version-998147 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 19:52:28 old-k8s-version-998147 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 19:52:28 old-k8s-version-998147 kubelet[6757]: I0717 19:52:28.483902    6757 server.go:416] Version: v1.20.0
	Jul 17 19:52:28 old-k8s-version-998147 kubelet[6757]: I0717 19:52:28.484365    6757 server.go:837] Client rotation is on, will bootstrap in background
	Jul 17 19:52:28 old-k8s-version-998147 kubelet[6757]: I0717 19:52:28.487215    6757 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 19:52:28 old-k8s-version-998147 kubelet[6757]: W0717 19:52:28.488230    6757 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 17 19:52:28 old-k8s-version-998147 kubelet[6757]: I0717 19:52:28.488280    6757 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-998147 -n old-k8s-version-998147
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-998147 -n old-k8s-version-998147: exit status 2 (229.076992ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-998147" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (101.65s)
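Note: the failure above traces back to the kubelet on the old-k8s-version (v1.20.0) node never becoming healthy, so kubeadm times out waiting for the control plane and the apiserver stays Stopped. The log's own suggestion is to inspect the kubelet journal and the CRI-O containers, and to retry with the systemd cgroup driver. A minimal debug sequence, using only the commands, profile name, and flag value that appear in the log above (a sketch of the suggested next steps, not a verified fix):

	# inspect the kubelet on the affected node
	out/minikube-linux-amd64 -p old-k8s-version-998147 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-998147 ssh "sudo journalctl -xeu kubelet -n 100"

	# list any control-plane containers CRI-O managed to start
	out/minikube-linux-amd64 -p old-k8s-version-998147 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"

	# retry with the cgroup driver suggested in the log output
	out/minikube-linux-amd64 start -p old-k8s-version-998147 --extra-config=kubelet.cgroup-driver=systemd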


Test pass (257/326)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 50.08
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.2/json-events 14.02
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.06
18 TestDownloadOnly/v1.30.2/DeleteAll 0.13
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 50.4
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.14
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.56
31 TestOffline 124.32
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 172.89
38 TestAddons/parallel/Registry 19.82
40 TestAddons/parallel/InspektorGadget 10.81
42 TestAddons/parallel/HelmTiller 11.55
44 TestAddons/parallel/CSI 69.64
45 TestAddons/parallel/Headlamp 13.93
46 TestAddons/parallel/CloudSpanner 5.71
47 TestAddons/parallel/LocalPath 62.34
48 TestAddons/parallel/NvidiaDevicePlugin 6.68
49 TestAddons/parallel/Yakd 6.01
53 TestAddons/serial/GCPAuth/Namespaces 0.12
55 TestCertOptions 83.13
56 TestCertExpiration 270.54
58 TestForceSystemdFlag 69.67
59 TestForceSystemdEnv 46.68
61 TestKVMDriverInstallOrUpdate 7.27
65 TestErrorSpam/setup 38.96
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.72
68 TestErrorSpam/pause 1.58
69 TestErrorSpam/unpause 1.59
70 TestErrorSpam/stop 5.09
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 94.96
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 39.97
77 TestFunctional/serial/KubeContext 0.05
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.07
82 TestFunctional/serial/CacheCmd/cache/add_local 2.17
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.61
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.11
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 34.24
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.48
93 TestFunctional/serial/LogsFileCmd 1.46
94 TestFunctional/serial/InvalidService 4.99
96 TestFunctional/parallel/ConfigCmd 0.33
97 TestFunctional/parallel/DashboardCmd 14.58
98 TestFunctional/parallel/DryRun 0.31
99 TestFunctional/parallel/InternationalLanguage 0.21
100 TestFunctional/parallel/StatusCmd 1.1
104 TestFunctional/parallel/ServiceCmdConnect 11.6
105 TestFunctional/parallel/AddonsCmd 0.13
106 TestFunctional/parallel/PersistentVolumeClaim 48.69
108 TestFunctional/parallel/SSHCmd 0.42
109 TestFunctional/parallel/CpCmd 1.32
110 TestFunctional/parallel/MySQL 33.76
111 TestFunctional/parallel/FileSync 0.21
112 TestFunctional/parallel/CertSync 1.34
116 TestFunctional/parallel/NodeLabels 0.07
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
120 TestFunctional/parallel/License 0.7
130 TestFunctional/parallel/ServiceCmd/DeployApp 11.22
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.28
132 TestFunctional/parallel/ProfileCmd/profile_list 0.33
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
134 TestFunctional/parallel/MountCmd/any-port 8.84
135 TestFunctional/parallel/MountCmd/specific-port 2.08
136 TestFunctional/parallel/ServiceCmd/List 0.31
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
139 TestFunctional/parallel/ServiceCmd/Format 0.47
140 TestFunctional/parallel/ServiceCmd/URL 0.49
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.42
142 TestFunctional/parallel/Version/short 0.05
143 TestFunctional/parallel/Version/components 0.47
144 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
145 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
146 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
147 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
148 TestFunctional/parallel/ImageCommands/ImageBuild 3.64
149 TestFunctional/parallel/ImageCommands/Setup 1.96
150 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.69
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
154 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
155 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.9
156 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.64
157 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
158 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.65
159 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.02
162 TestFunctional/delete_minikube_cached_images 0.02
166 TestMultiControlPlane/serial/StartCluster 210.78
167 TestMultiControlPlane/serial/DeployApp 6.26
168 TestMultiControlPlane/serial/PingHostFromPods 1.2
169 TestMultiControlPlane/serial/AddWorkerNode 57.26
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
172 TestMultiControlPlane/serial/CopyFile 12.83
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.16
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
181 TestMultiControlPlane/serial/RestartCluster 331.79
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
183 TestMultiControlPlane/serial/AddSecondaryNode 74.89
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
188 TestJSONOutput/start/Command 55.3
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.71
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.64
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.37
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.2
216 TestMainNoArgs 0.05
217 TestMinikubeProfile 86.9
220 TestMountStart/serial/StartWithMountFirst 30.88
221 TestMountStart/serial/VerifyMountFirst 0.37
222 TestMountStart/serial/StartWithMountSecond 30.16
223 TestMountStart/serial/VerifyMountSecond 0.37
224 TestMountStart/serial/DeleteFirst 0.69
225 TestMountStart/serial/VerifyMountPostDelete 0.38
226 TestMountStart/serial/Stop 1.28
227 TestMountStart/serial/RestartStopped 21.15
228 TestMountStart/serial/VerifyMountPostStop 0.37
231 TestMultiNode/serial/FreshStart2Nodes 117.14
232 TestMultiNode/serial/DeployApp2Nodes 5.75
233 TestMultiNode/serial/PingHostFrom2Pods 0.8
234 TestMultiNode/serial/AddNode 52.13
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.21
237 TestMultiNode/serial/CopyFile 7.2
238 TestMultiNode/serial/StopNode 2.28
239 TestMultiNode/serial/StartAfterStop 39.3
241 TestMultiNode/serial/DeleteNode 2.34
243 TestMultiNode/serial/RestartMultiNode 183.43
244 TestMultiNode/serial/ValidateNameConflict 45.57
251 TestScheduledStopUnix 115.17
255 TestRunningBinaryUpgrade 234.01
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
261 TestNoKubernetes/serial/StartWithK8s 95.78
262 TestNoKubernetes/serial/StartWithStopK8s 8.17
263 TestNoKubernetes/serial/Start 27.51
264 TestStoppedBinaryUpgrade/Setup 2.52
265 TestStoppedBinaryUpgrade/Upgrade 120.1
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
267 TestNoKubernetes/serial/ProfileList 1.12
268 TestNoKubernetes/serial/Stop 1.3
269 TestNoKubernetes/serial/StartNoArgs 40.19
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
278 TestNetworkPlugins/group/false 3.63
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.83
291 TestPause/serial/Start 120.96
292 TestNetworkPlugins/group/auto/Start 120.9
293 TestPause/serial/SecondStartNoReconfiguration 38.75
294 TestNetworkPlugins/group/calico/Start 87.9
295 TestNetworkPlugins/group/auto/KubeletFlags 0.23
296 TestNetworkPlugins/group/auto/NetCatPod 11.27
297 TestNetworkPlugins/group/auto/DNS 0.19
298 TestNetworkPlugins/group/auto/Localhost 0.15
299 TestNetworkPlugins/group/auto/HairPin 0.15
300 TestPause/serial/Pause 1.17
301 TestPause/serial/VerifyStatus 0.28
302 TestPause/serial/Unpause 1.2
303 TestPause/serial/PauseAgain 1.38
304 TestPause/serial/DeletePaused 1.21
305 TestPause/serial/VerifyDeletedResources 4.15
306 TestNetworkPlugins/group/custom-flannel/Start 80.28
307 TestNetworkPlugins/group/kindnet/Start 99.35
308 TestNetworkPlugins/group/flannel/Start 112.93
309 TestNetworkPlugins/group/calico/ControllerPod 6.02
310 TestNetworkPlugins/group/calico/KubeletFlags 0.37
311 TestNetworkPlugins/group/calico/NetCatPod 14.36
312 TestNetworkPlugins/group/calico/DNS 0.18
313 TestNetworkPlugins/group/calico/Localhost 0.15
314 TestNetworkPlugins/group/calico/HairPin 0.14
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.34
317 TestNetworkPlugins/group/enable-default-cni/Start 66.25
318 TestNetworkPlugins/group/custom-flannel/DNS 0.19
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
321 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
322 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
323 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
324 TestNetworkPlugins/group/bridge/Start 107.5
325 TestNetworkPlugins/group/kindnet/DNS 0.22
326 TestNetworkPlugins/group/kindnet/Localhost 0.16
327 TestNetworkPlugins/group/kindnet/HairPin 0.21
330 TestNetworkPlugins/group/flannel/ControllerPod 6.01
331 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
332 TestNetworkPlugins/group/flannel/NetCatPod 13.24
333 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
334 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.28
335 TestNetworkPlugins/group/flannel/DNS 0.28
336 TestNetworkPlugins/group/flannel/Localhost 0.31
337 TestNetworkPlugins/group/flannel/HairPin 0.16
338 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
339 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
340 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
342 TestStartStop/group/no-preload/serial/FirstStart 130.23
344 TestStartStop/group/embed-certs/serial/FirstStart 115.78
345 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
346 TestNetworkPlugins/group/bridge/NetCatPod 9.25
347 TestNetworkPlugins/group/bridge/DNS 0.16
348 TestNetworkPlugins/group/bridge/Localhost 0.15
349 TestNetworkPlugins/group/bridge/HairPin 0.15
351 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 95.57
352 TestStartStop/group/embed-certs/serial/DeployApp 9.32
353 TestStartStop/group/no-preload/serial/DeployApp 10.29
354 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.05
356 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
358 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
365 TestStartStop/group/embed-certs/serial/SecondStart 680.46
366 TestStartStop/group/no-preload/serial/SecondStart 582.21
368 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 599.27
369 TestStartStop/group/old-k8s-version/serial/Stop 3.46
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
381 TestStartStop/group/newest-cni/serial/FirstStart 47.46
382 TestStartStop/group/newest-cni/serial/DeployApp 0
383 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.18
384 TestStartStop/group/newest-cni/serial/Stop 7.36
385 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
386 TestStartStop/group/newest-cni/serial/SecondStart 38.27
387 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
390 TestStartStop/group/newest-cni/serial/Pause 2.61
x
+
TestDownloadOnly/v1.20.0/json-events (50.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-013846 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-013846 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (50.079720767s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (50.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
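preload-exists only asserts that the tarball fetched by the previous step is present in the local cache. A rough equivalent check from the shell, using the cache destinations quoted in the LogsDuration output below; the /home/jenkins/minikube-integration/19282-392903 prefix is this CI agent's minikube home, so substitute your own .minikube directory elsewhere:

	ls /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/
	ls /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.20.0/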

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-013846
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-013846: exit status 85 (60.617101ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-013846 | jenkins | v1.33.1 | 17 Jul 24 18:02 UTC |          |
	|         | -p download-only-013846        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:02:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:02:23.758940  400183 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:02:23.759189  400183 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:02:23.759197  400183 out.go:304] Setting ErrFile to fd 2...
	I0717 18:02:23.759202  400183 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:02:23.759383  400183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	W0717 18:02:23.759492  400183 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19282-392903/.minikube/config/config.json: open /home/jenkins/minikube-integration/19282-392903/.minikube/config/config.json: no such file or directory
	I0717 18:02:23.760052  400183 out.go:298] Setting JSON to true
	I0717 18:02:23.761036  400183 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6287,"bootTime":1721233057,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:02:23.761100  400183 start.go:139] virtualization: kvm guest
	I0717 18:02:23.763559  400183 out.go:97] [download-only-013846] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0717 18:02:23.763670  400183 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 18:02:23.763767  400183 notify.go:220] Checking for updates...
	I0717 18:02:23.765009  400183 out.go:169] MINIKUBE_LOCATION=19282
	I0717 18:02:23.766418  400183 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:02:23.767858  400183 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:02:23.769288  400183 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:02:23.770659  400183 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 18:02:23.773144  400183 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 18:02:23.773454  400183 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:02:23.805481  400183 out.go:97] Using the kvm2 driver based on user configuration
	I0717 18:02:23.805512  400183 start.go:297] selected driver: kvm2
	I0717 18:02:23.805523  400183 start.go:901] validating driver "kvm2" against <nil>
	I0717 18:02:23.805860  400183 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:02:23.805952  400183 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:02:23.821033  400183 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:02:23.821091  400183 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 18:02:23.821538  400183 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0717 18:02:23.821719  400183 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 18:02:23.821749  400183 cni.go:84] Creating CNI manager for ""
	I0717 18:02:23.821757  400183 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:02:23.821765  400183 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 18:02:23.821813  400183 start.go:340] cluster config:
	{Name:download-only-013846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-013846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:02:23.821974  400183 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:02:23.823534  400183 out.go:97] Downloading VM boot image ...
	I0717 18:02:23.823565  400183 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19282-392903/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 18:02:34.364138  400183 out.go:97] Starting "download-only-013846" primary control-plane node in "download-only-013846" cluster
	I0717 18:02:34.364171  400183 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 18:02:34.482666  400183 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 18:02:34.482730  400183 cache.go:56] Caching tarball of preloaded images
	I0717 18:02:34.482952  400183 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 18:02:34.485060  400183 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0717 18:02:34.485077  400183 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 18:02:34.607698  400183 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 18:02:47.524894  400183 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 18:02:47.524996  400183 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 18:02:48.443571  400183 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 18:02:48.443925  400183 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/download-only-013846/config.json ...
	I0717 18:02:48.443960  400183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/download-only-013846/config.json: {Name:mk96288aaf0cd815fd9aa988c1da84a2f83b497b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:02:48.444150  400183 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 18:02:48.444324  400183 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-013846 host does not exist
	  To start a cluster, run: "minikube start -p download-only-013846"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
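The Last Start log above records the preload flow end to end: look up the remote tarball, download it with an md5 checksum, verify it, then cache it. A hedged shell equivalent of just the download-and-verify step, reusing the URL and checksum printed at 18:02:34 (the output filename here is arbitrary):

	URL='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4'
	curl -fLo preload-v1.20.0.tar.lz4 "$URL"
	# md5 value copied from the download.go line above
	echo 'f93b07cde9c3289306cbaeb7a1803c19  preload-v1.20.0.tar.lz4' | md5sum -c -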

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-013846
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/json-events (14.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-669228 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-669228 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.023111387s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (14.02s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-669228
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-669228: exit status 85 (60.617178ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-013846 | jenkins | v1.33.1 | 17 Jul 24 18:02 UTC |                     |
	|         | -p download-only-013846        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 17 Jul 24 18:03 UTC | 17 Jul 24 18:03 UTC |
	| delete  | -p download-only-013846        | download-only-013846 | jenkins | v1.33.1 | 17 Jul 24 18:03 UTC | 17 Jul 24 18:03 UTC |
	| start   | -o=json --download-only        | download-only-669228 | jenkins | v1.33.1 | 17 Jul 24 18:03 UTC |                     |
	|         | -p download-only-669228        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:03:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:03:14.160943  400528 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:03:14.161349  400528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:03:14.161364  400528 out.go:304] Setting ErrFile to fd 2...
	I0717 18:03:14.161388  400528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:03:14.161687  400528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:03:14.162251  400528 out.go:298] Setting JSON to true
	I0717 18:03:14.163235  400528 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6337,"bootTime":1721233057,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:03:14.163292  400528 start.go:139] virtualization: kvm guest
	I0717 18:03:14.165600  400528 out.go:97] [download-only-669228] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:03:14.165749  400528 notify.go:220] Checking for updates...
	I0717 18:03:14.167263  400528 out.go:169] MINIKUBE_LOCATION=19282
	I0717 18:03:14.168925  400528 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:03:14.170497  400528 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:03:14.172049  400528 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:03:14.173500  400528 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 18:03:14.175886  400528 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 18:03:14.176167  400528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:03:14.207977  400528 out.go:97] Using the kvm2 driver based on user configuration
	I0717 18:03:14.208009  400528 start.go:297] selected driver: kvm2
	I0717 18:03:14.208014  400528 start.go:901] validating driver "kvm2" against <nil>
	I0717 18:03:14.208300  400528 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:03:14.208389  400528 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:03:14.223999  400528 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:03:14.224071  400528 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 18:03:14.224731  400528 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0717 18:03:14.224924  400528 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 18:03:14.225005  400528 cni.go:84] Creating CNI manager for ""
	I0717 18:03:14.225020  400528 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:03:14.225034  400528 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 18:03:14.225135  400528 start.go:340] cluster config:
	{Name:download-only-669228 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-669228 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:03:14.225266  400528 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:03:14.226919  400528 out.go:97] Starting "download-only-669228" primary control-plane node in "download-only-669228" cluster
	I0717 18:03:14.226941  400528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:03:14.876819  400528 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:03:14.876859  400528 cache.go:56] Caching tarball of preloaded images
	I0717 18:03:14.877079  400528 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:03:14.879288  400528 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0717 18:03:14.879315  400528 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 ...
	I0717 18:03:14.998697  400528 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:cd14409e225276132db5cf7d5d75c2d2 -> /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-669228 host does not exist
	  To start a cluster, run: "minikube start -p download-only-669228"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-669228
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/json-events (50.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-188993 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-188993 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (50.398519715s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (50.40s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-188993
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-188993: exit status 85 (64.992617ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-013846 | jenkins | v1.33.1 | 17 Jul 24 18:02 UTC |                     |
	|         | -p download-only-013846             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 18:03 UTC | 17 Jul 24 18:03 UTC |
	| delete  | -p download-only-013846             | download-only-013846 | jenkins | v1.33.1 | 17 Jul 24 18:03 UTC | 17 Jul 24 18:03 UTC |
	| start   | -o=json --download-only             | download-only-669228 | jenkins | v1.33.1 | 17 Jul 24 18:03 UTC |                     |
	|         | -p download-only-669228             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 18:03 UTC | 17 Jul 24 18:03 UTC |
	| delete  | -p download-only-669228             | download-only-669228 | jenkins | v1.33.1 | 17 Jul 24 18:03 UTC | 17 Jul 24 18:03 UTC |
	| start   | -o=json --download-only             | download-only-188993 | jenkins | v1.33.1 | 17 Jul 24 18:03 UTC |                     |
	|         | -p download-only-188993             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:03:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:03:28.502499  400751 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:03:28.502749  400751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:03:28.502762  400751 out.go:304] Setting ErrFile to fd 2...
	I0717 18:03:28.502770  400751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:03:28.502971  400751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:03:28.503517  400751 out.go:298] Setting JSON to true
	I0717 18:03:28.504459  400751 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6351,"bootTime":1721233057,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:03:28.504544  400751 start.go:139] virtualization: kvm guest
	I0717 18:03:28.506609  400751 out.go:97] [download-only-188993] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:03:28.506775  400751 notify.go:220] Checking for updates...
	I0717 18:03:28.508047  400751 out.go:169] MINIKUBE_LOCATION=19282
	I0717 18:03:28.509424  400751 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:03:28.510688  400751 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:03:28.511862  400751 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:03:28.513022  400751 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 18:03:28.515149  400751 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 18:03:28.515333  400751 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:03:28.547370  400751 out.go:97] Using the kvm2 driver based on user configuration
	I0717 18:03:28.547393  400751 start.go:297] selected driver: kvm2
	I0717 18:03:28.547406  400751 start.go:901] validating driver "kvm2" against <nil>
	I0717 18:03:28.547738  400751 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:03:28.547808  400751 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19282-392903/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:03:28.563216  400751 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:03:28.563284  400751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 18:03:28.563808  400751 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0717 18:03:28.563956  400751 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 18:03:28.563982  400751 cni.go:84] Creating CNI manager for ""
	I0717 18:03:28.563993  400751 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:03:28.564003  400751 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 18:03:28.564070  400751 start.go:340] cluster config:
	{Name:download-only-188993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-188993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:03:28.564164  400751 iso.go:125] acquiring lock: {Name:mk538e17966376fb8d1586bc9fef119ddb755e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:03:28.565820  400751 out.go:97] Starting "download-only-188993" primary control-plane node in "download-only-188993" cluster
	I0717 18:03:28.565838  400751 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 18:03:29.070555  400751 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 18:03:29.070592  400751 cache.go:56] Caching tarball of preloaded images
	I0717 18:03:29.070784  400751 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 18:03:29.072670  400751 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0717 18:03:29.072697  400751 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 18:03:29.201047  400751 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 18:03:39.675603  400751 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 18:03:39.675704  400751 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19282-392903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 18:03:40.412970  400751 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0717 18:03:40.413326  400751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/download-only-188993/config.json ...
	I0717 18:03:40.413358  400751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/download-only-188993/config.json: {Name:mk04da102572e1c5cbd6a689e8bf3d1eab84acd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:03:40.413541  400751 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 18:03:40.413685  400751 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19282-392903/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-188993 host does not exist
	  To start a cluster, run: "minikube start -p download-only-188993"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-188993
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-742633 --alsologtostderr --binary-mirror http://127.0.0.1:38237 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-742633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-742633
--- PASS: TestBinaryMirror (0.56s)
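TestBinaryMirror exercises the --binary-mirror flag, which points the kubectl/kubelet/kubeadm downloads at the given URL instead of the default release location (here http://127.0.0.1:38237). A hedged sketch of driving the same flag by hand; the profile name, port, and mirror directory layout are assumptions for illustration, not taken from the report:

	# Serve a local directory over HTTP (its layout must mirror the paths minikube requests)
	python3 -m http.server 38237 --directory ./k8s-mirror &

	# --download-only keeps this to fetching binaries, as in the test
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --alsologtostderr \
	  --binary-mirror http://127.0.0.1:38237 --driver=kvm2 --container-runtime=crio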

                                                
                                    
x
+
TestOffline (124.32s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-175956 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-175956 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m3.341673611s)
helpers_test.go:175: Cleaning up "offline-crio-175956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-175956
--- PASS: TestOffline (124.32s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-453453
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-453453: exit status 85 (55.141062ms)

                                                
                                                
-- stdout --
	* Profile "addons-453453" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-453453"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-453453
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-453453: exit status 85 (53.737934ms)

                                                
                                                
-- stdout --
	* Profile "addons-453453" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-453453"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (172.89s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-453453 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-453453 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m52.886869752s)
--- PASS: TestAddons/Setup (172.89s)

                                                
                                    
TestAddons/parallel/Registry (19.82s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 19.068695ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-mdcds" [2aea3a0e-bf77-437f-ada1-99cf0afc991d] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006355114s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bvkbp" [ee546b39-8d72-4a83-b1f0-5d08d5ba2998] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005242348s
addons_test.go:342: (dbg) Run:  kubectl --context addons-453453 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-453453 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-453453 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.750116086s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-453453 ip
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-453453 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.82s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.81s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dd2tz" [acb81455-b484-4108-8fdb-644434e32bd9] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005573737s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-453453
2024/07/17 18:07:32 [DEBUG] GET http://192.168.39.136:5000
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-453453: (5.798592857s)
--- PASS: TestAddons/parallel/InspektorGadget (10.81s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.55s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.977812ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-g4wtr" [05df6af2-4add-4e71-b8e0-eb055c2f28cc] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005473137s
addons_test.go:475: (dbg) Run:  kubectl --context addons-453453 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-453453 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.763311936s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-453453 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.55s)

                                                
                                    
TestAddons/parallel/CSI (69.64s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 6.147428ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-453453 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-453453 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [70182e5d-4860-4468-b3c4-8cfe38bb5208] Pending
helpers_test.go:344: "task-pv-pod" [70182e5d-4860-4468-b3c4-8cfe38bb5208] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [70182e5d-4860-4468-b3c4-8cfe38bb5208] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.003421414s
addons_test.go:586: (dbg) Run:  kubectl --context addons-453453 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-453453 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-453453 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-453453 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-453453 delete pod task-pv-pod: (1.163134747s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-453453 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-453453 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-453453 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d8d8d775-ebb1-49e1-ab0f-c444ed5d0f0f] Pending
helpers_test.go:344: "task-pv-pod-restore" [d8d8d775-ebb1-49e1-ab0f-c444ed5d0f0f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d8d8d775-ebb1-49e1-ab0f-c444ed5d0f0f] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004116077s
addons_test.go:628: (dbg) Run:  kubectl --context addons-453453 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-453453 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-453453 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-453453 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-453453 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.740766024s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-453453 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (69.64s)

                                                
                                    
TestAddons/parallel/Headlamp (13.93s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-453453 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-29grz" [b89e8f1b-24a4-46f3-b300-72f6c803f7d6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-29grz" [b89e8f1b-24a4-46f3-b300-72f6c803f7d6] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004799562s
--- PASS: TestAddons/parallel/Headlamp (13.93s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.71s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-jtcdk" [73e9927f-22a3-480c-9d99-8d35a1aa429b] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003480038s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-453453
--- PASS: TestAddons/parallel/CloudSpanner (5.71s)

                                                
                                    
TestAddons/parallel/LocalPath (62.34s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-453453 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-453453 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-453453 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [dd828895-b0aa-491b-a16f-8619cb54a6ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [dd828895-b0aa-491b-a16f-8619cb54a6ad] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [dd828895-b0aa-491b-a16f-8619cb54a6ad] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 11.004392008s
addons_test.go:992: (dbg) Run:  kubectl --context addons-453453 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-453453 ssh "cat /opt/local-path-provisioner/pvc-78518099-7f58-4e6b-b950-2bfc9e8ecd09_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-453453 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-453453 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-453453 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-amd64 -p addons-453453 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.370272218s)
--- PASS: TestAddons/parallel/LocalPath (62.34s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.68s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-h5kz7" [b8017821-48d3-427f-87a1-64e210b8ca26] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007582006s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-453453
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.68s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-rzt74" [d4ba3b29-c2ab-4ed4-894d-9fcca9d6eaca] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005409843s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-453453 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-453453 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestCertOptions (83.13s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-597798 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-597798 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m21.441405413s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-597798 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-597798 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-597798 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-597798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-597798
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-597798: (1.231694313s)
--- PASS: TestCertOptions (83.13s)

                                                
                                    
TestCertExpiration (270.54s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-012081 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-012081 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (48.84154082s)
E0717 19:17:13.091363  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-012081 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-012081 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (39.99042812s)
helpers_test.go:175: Cleaning up "cert-expiration-012081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-012081
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-012081: (1.708678534s)
--- PASS: TestCertExpiration (270.54s)

                                                
                                    
TestForceSystemdFlag (69.67s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-919742 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-919742 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m8.69675656s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-919742 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-919742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-919742
--- PASS: TestForceSystemdFlag (69.67s)

                                                
                                    
TestForceSystemdEnv (46.68s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-653776 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-653776 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.867959413s)
helpers_test.go:175: Cleaning up "force-systemd-env-653776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-653776
--- PASS: TestForceSystemdEnv (46.68s)

                                                
                                    
TestKVMDriverInstallOrUpdate (7.27s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (7.27s)

                                                
                                    
TestErrorSpam/setup (38.96s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-278469 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-278469 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-278469 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-278469 --driver=kvm2  --container-runtime=crio: (38.961213346s)
--- PASS: TestErrorSpam/setup (38.96s)

                                                
                                    
TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.58s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 pause
--- PASS: TestErrorSpam/pause (1.58s)

                                                
                                    
TestErrorSpam/unpause (1.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 unpause
--- PASS: TestErrorSpam/unpause (1.59s)

                                                
                                    
TestErrorSpam/stop (5.09s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 stop: (2.296371831s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 stop: (1.113937308s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-278469 --log_dir /tmp/nospam-278469 stop: (1.676657764s)
--- PASS: TestErrorSpam/stop (5.09s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19282-392903/.minikube/files/etc/test/nested/copy/400171/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (94.96s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-291239 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0717 18:17:13.094278  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:17:13.100025  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:17:13.110279  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:17:13.130528  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:17:13.170792  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:17:13.251164  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:17:13.411635  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:17:13.732368  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:17:14.373404  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:17:15.653952  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:17:18.215728  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:17:23.336361  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:17:33.576897  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:17:54.057956  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:18:35.019258  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-291239 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m34.958464562s)
--- PASS: TestFunctional/serial/StartWithProxy (94.96s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.97s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-291239 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-291239 --alsologtostderr -v=8: (39.967813838s)
functional_test.go:659: soft start took 39.968356659s for "functional-291239" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.97s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-291239 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-291239 cache add registry.k8s.io/pause:3.3: (1.096146626s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-291239 cache add registry.k8s.io/pause:latest: (1.030262302s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-291239 /tmp/TestFunctionalserialCacheCmdcacheadd_local1486511274/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 cache add minikube-local-cache-test:functional-291239
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-291239 cache add minikube-local-cache-test:functional-291239: (1.873035909s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 cache delete minikube-local-cache-test:functional-291239
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-291239
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-291239 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (214.581957ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 kubectl -- --context functional-291239 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-291239 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.24s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-291239 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0717 18:19:56.940800  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-291239 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.240296379s)
functional_test.go:757: restart took 34.240439825s for "functional-291239" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.24s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-291239 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.48s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-291239 logs: (1.481905904s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.46s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 logs --file /tmp/TestFunctionalserialLogsFileCmd3643851558/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-291239 logs --file /tmp/TestFunctionalserialLogsFileCmd3643851558/001/logs.txt: (1.463103192s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                    
TestFunctional/serial/InvalidService (4.99s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-291239 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-291239
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-291239: exit status 115 (290.335663ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.137:31781 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-291239 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-291239 delete -f testdata/invalidsvc.yaml: (1.502837311s)
--- PASS: TestFunctional/serial/InvalidService (4.99s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-291239 config get cpus: exit status 14 (55.626692ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-291239 config get cpus: exit status 14 (51.495852ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.58s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-291239 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-291239 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 410553: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.58s)

                                                
                                    
TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-291239 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-291239 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (168.147655ms)

                                                
                                                
-- stdout --
	* [functional-291239] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19282
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:20:19.269361  410124 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:20:19.269496  410124 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:20:19.269506  410124 out.go:304] Setting ErrFile to fd 2...
	I0717 18:20:19.269510  410124 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:20:19.269721  410124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:20:19.270263  410124 out.go:298] Setting JSON to false
	I0717 18:20:19.271442  410124 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7362,"bootTime":1721233057,"procs":258,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:20:19.271509  410124 start.go:139] virtualization: kvm guest
	I0717 18:20:19.273754  410124 out.go:177] * [functional-291239] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:20:19.275284  410124 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 18:20:19.275292  410124 notify.go:220] Checking for updates...
	I0717 18:20:19.278323  410124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:20:19.279918  410124 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:20:19.281477  410124 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:20:19.283699  410124 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:20:19.285321  410124 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:20:19.287398  410124 config.go:182] Loaded profile config "functional-291239": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:20:19.287960  410124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:20:19.288054  410124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:20:19.305518  410124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I0717 18:20:19.305933  410124 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:20:19.306521  410124 main.go:141] libmachine: Using API Version  1
	I0717 18:20:19.306546  410124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:20:19.306892  410124 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:20:19.307082  410124 main.go:141] libmachine: (functional-291239) Calling .DriverName
	I0717 18:20:19.307311  410124 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:20:19.307600  410124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:20:19.307648  410124 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:20:19.324131  410124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39263
	I0717 18:20:19.324566  410124 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:20:19.325082  410124 main.go:141] libmachine: Using API Version  1
	I0717 18:20:19.325107  410124 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:20:19.325539  410124 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:20:19.325762  410124 main.go:141] libmachine: (functional-291239) Calling .DriverName
	I0717 18:20:19.361108  410124 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 18:20:19.362551  410124 start.go:297] selected driver: kvm2
	I0717 18:20:19.362585  410124 start.go:901] validating driver "kvm2" against &{Name:functional-291239 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-291239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:20:19.362747  410124 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:20:19.365242  410124 out.go:177] 
	W0717 18:20:19.366513  410124 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 18:20:19.367781  410124 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-291239 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-291239 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-291239 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (204.806523ms)

                                                
                                                
-- stdout --
	* [functional-291239] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19282
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:20:19.049079  410012 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:20:19.049205  410012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:20:19.049216  410012 out.go:304] Setting ErrFile to fd 2...
	I0717 18:20:19.049221  410012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:20:19.049726  410012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:20:19.050458  410012 out.go:298] Setting JSON to false
	I0717 18:20:19.051914  410012 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7362,"bootTime":1721233057,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:20:19.052004  410012 start.go:139] virtualization: kvm guest
	I0717 18:20:19.054212  410012 out.go:177] * [functional-291239] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0717 18:20:19.055591  410012 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 18:20:19.055651  410012 notify.go:220] Checking for updates...
	I0717 18:20:19.058795  410012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:20:19.060218  410012 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 18:20:19.061503  410012 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 18:20:19.062834  410012 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:20:19.064123  410012 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:20:19.065872  410012 config.go:182] Loaded profile config "functional-291239": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:20:19.066511  410012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:20:19.066567  410012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:20:19.096614  410012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37687
	I0717 18:20:19.097302  410012 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:20:19.097959  410012 main.go:141] libmachine: Using API Version  1
	I0717 18:20:19.097974  410012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:20:19.098364  410012 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:20:19.098506  410012 main.go:141] libmachine: (functional-291239) Calling .DriverName
	I0717 18:20:19.098702  410012 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:20:19.099097  410012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:20:19.099131  410012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:20:19.135062  410012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I0717 18:20:19.135625  410012 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:20:19.136266  410012 main.go:141] libmachine: Using API Version  1
	I0717 18:20:19.136289  410012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:20:19.136726  410012 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:20:19.136883  410012 main.go:141] libmachine: (functional-291239) Calling .DriverName
	I0717 18:20:19.190738  410012 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0717 18:20:19.192088  410012 start.go:297] selected driver: kvm2
	I0717 18:20:19.192114  410012 start.go:901] validating driver "kvm2" against &{Name:functional-291239 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721234491-19282@sha256:af477ffa9f6167a73f0adae71d3a4e601ba0c2adc97a4067255b422b3477d2c2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-291239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:20:19.192246  410012 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:20:19.194598  410012 out.go:177] 
	W0717 18:20:19.195880  410012 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 18:20:19.197120  410012 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)
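The second status invocation above passes `-f` a Go text/template rendered against the status object (host, kubelet, apiserver, kubeconfig fields). As an illustration only, the snippet below evaluates a template of the same shape against a stand-in struct; the field names follow the template in the log, not minikube's actual status type:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in carrying just the fields the template references;
// the real minikube status object may expose more.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// Same shape of format string as the `status -f ...` call in the log.
	const format = "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	_ = tmpl.Execute(os.Stdout, Status{
		Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
	})
}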

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-291239 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-291239 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-bfmjx" [bb11076b-bb37-4c09-bd61-9b471bec8965] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-bfmjx" [bb11076b-bb37-4c09-bd61-9b471bec8965] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004250116s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.137:31709
functional_test.go:1671: http://192.168.39.137:31709: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-bfmjx

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.137:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.137:31709
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.60s)
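The connect test above reduces to: create a deployment from registry.k8s.io/echoserver:1.8, expose it as a NodePort service, ask `minikube service ... --url` for the endpoint, and fetch it. A hedged sketch of that final fetch; the URL is the ephemeral NodePort endpoint printed in this log and only resolves while that cluster is up:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint reported by `minikube service hello-node-connect --url` above.
	url := "http://192.168.39.137:31709/"

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	// echoserver reflects the request (hostname, headers, client address),
	// which is the body the test prints on success.
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body))
}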

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (48.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [35cd0309-5d51-46e6-8fb6-688639fadd1d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004658733s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-291239 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-291239 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-291239 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-291239 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [84340ba1-54c2-4f4a-8893-a4086ce78c3b] Pending
helpers_test.go:344: "sp-pod" [84340ba1-54c2-4f4a-8893-a4086ce78c3b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [84340ba1-54c2-4f4a-8893-a4086ce78c3b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004940978s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-291239 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-291239 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-291239 delete -f testdata/storage-provisioner/pod.yaml: (2.863025593s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-291239 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [764642a6-8bc7-442c-91f9-1942ac6b0dd8] Pending
helpers_test.go:344: "sp-pod" [764642a6-8bc7-442c-91f9-1942ac6b0dd8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [764642a6-8bc7-442c-91f9-1942ac6b0dd8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.005201258s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-291239 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.69s)
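The PVC test above is a persistence round trip: bind a claim, run a pod that mounts it, write /tmp/mount/foo, delete the pod, start a second pod against the same claim, and confirm the file is still visible. A sketch of that sequence driven through kubectl with the same testdata manifests; the readiness waits the test performs between steps are elided here:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a kubectl command against the named context and fails fast.
func run(ctx string, args ...string) string {
	out, err := exec.Command("kubectl", append([]string{"--context", ctx}, args...)...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	ctx := "functional-291239" // context name from the log

	run(ctx, "apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	run(ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (the real test waits for sp-pod to be Running before exec'ing into it)
	run(ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	run(ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run(ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")

	// If the claim really persisted the data, the replacement pod sees the old file.
	fmt.Print(run(ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount"))
}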

                                                
                                    
TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh -n functional-291239 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 cp functional-291239:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd365607670/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh -n functional-291239 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh -n functional-291239 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.32s)

                                                
                                    
TestFunctional/parallel/MySQL (33.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-291239 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-skxjf" [e8a8587b-3159-4bfe-8dce-9364b8deb426] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-skxjf" [e8a8587b-3159-4bfe-8dce-9364b8deb426] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.004110108s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-291239 exec mysql-64454c8b5c-skxjf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-291239 exec mysql-64454c8b5c-skxjf -- mysql -ppassword -e "show databases;": exit status 1 (130.180801ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-291239 exec mysql-64454c8b5c-skxjf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-291239 exec mysql-64454c8b5c-skxjf -- mysql -ppassword -e "show databases;": exit status 1 (138.305589ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-291239 exec mysql-64454c8b5c-skxjf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (33.76s)
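The two ERROR 2002 failures above are expected: the pod reports Running before mysqld has finished creating its socket, so the test simply reruns `show databases;` until it succeeds. A sketch of that retry loop; the pod name is the one from this log and the password comes from testdata/mysql.yaml:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx := "functional-291239"
	pod := "mysql-64454c8b5c-skxjf" // pod name from the log; yours will differ

	// mysqld can lag behind the pod's Running state, so poll the query until it
	// stops failing with "Can't connect ... mysqld.sock" (ERROR 2002).
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("mysql never became reachable")
}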

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/400171/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "sudo cat /etc/test/nested/copy/400171/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
TestFunctional/parallel/CertSync (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/400171.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "sudo cat /etc/ssl/certs/400171.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/400171.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "sudo cat /usr/share/ca-certificates/400171.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/4001712.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "sudo cat /etc/ssl/certs/4001712.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/4001712.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "sudo cat /usr/share/ca-certificates/4001712.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.34s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-291239 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-291239 ssh "sudo systemctl is-active docker": exit status 1 (237.966774ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-291239 ssh "sudo systemctl is-active containerd": exit status 1 (218.405594ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                    
TestFunctional/parallel/License (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.70s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-291239 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-291239 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-wcjrm" [f9a4c21f-211a-40aa-b2ce-dba1a1debaa1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-wcjrm" [f9a4c21f-211a-40aa-b2ce-dba1a1debaa1] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004101457s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "275.630502ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "50.823191ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "285.911949ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "52.803424ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-291239 /tmp/TestFunctionalparallelMountCmdany-port2742559877/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721240408145714506" to /tmp/TestFunctionalparallelMountCmdany-port2742559877/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721240408145714506" to /tmp/TestFunctionalparallelMountCmdany-port2742559877/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721240408145714506" to /tmp/TestFunctionalparallelMountCmdany-port2742559877/001/test-1721240408145714506
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-291239 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (251.83861ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 18:20 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 18:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 18:20 test-1721240408145714506
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh cat /mount-9p/test-1721240408145714506
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-291239 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [dba94612-bfb7-42a4-b5d3-0c8dad9ea237] Pending
helpers_test.go:344: "busybox-mount" [dba94612-bfb7-42a4-b5d3-0c8dad9ea237] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [dba94612-bfb7-42a4-b5d3-0c8dad9ea237] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [dba94612-bfb7-42a4-b5d3-0c8dad9ea237] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004783058s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-291239 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-291239 /tmp/TestFunctionalparallelMountCmdany-port2742559877/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.84s)
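The mount test follows the same retry pattern seen elsewhere: the first `findmnt -T /mount-9p | grep 9p` probe runs before the 9p mount has appeared inside the guest and exits 1, and the test retries until it shows up before checking the files written on the host side. A small polling sketch of that readiness check, assuming a `minikube mount` daemon like the one in the log is already running:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// mounted reports whether /mount-9p is visible as a 9p mount inside the guest,
// mirroring the `minikube ssh "findmnt -T /mount-9p | grep 9p"` probe above.
func mounted(profile string) bool {
	err := exec.Command("minikube", "-p", profile,
		"ssh", "findmnt -T /mount-9p | grep 9p").Run()
	return err == nil
}

func main() {
	profile := "functional-291239"
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		if mounted(profile) {
			fmt.Println("9p mount is ready")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for /mount-9p")
}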

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-291239 /tmp/TestFunctionalparallelMountCmdspecific-port1636956272/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-291239 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (240.991497ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-291239 /tmp/TestFunctionalparallelMountCmdspecific-port1636956272/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-291239 ssh "sudo umount -f /mount-9p": exit status 1 (210.496451ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-291239 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-291239 /tmp/TestFunctionalparallelMountCmdspecific-port1636956272/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.08s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 service list -o json
functional_test.go:1490: Took "370.28897ms" to run "out/minikube-linux-amd64 -p functional-291239 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.137:30274
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.137:30274
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-291239 /tmp/TestFunctionalparallelMountCmdVerifyCleanup809932812/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-291239 /tmp/TestFunctionalparallelMountCmdVerifyCleanup809932812/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-291239 /tmp/TestFunctionalparallelMountCmdVerifyCleanup809932812/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-291239 ssh "findmnt -T" /mount1: exit status 1 (327.005596ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-291239 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-291239 /tmp/TestFunctionalparallelMountCmdVerifyCleanup809932812/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-291239 /tmp/TestFunctionalparallelMountCmdVerifyCleanup809932812/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-291239 /tmp/TestFunctionalparallelMountCmdVerifyCleanup809932812/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-291239 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-291239
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240513-cd2ac642
docker.io/kicbase/echo-server:functional-291239
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-291239 image ls --format short --alsologtostderr:
I0717 18:20:31.930084  411205 out.go:291] Setting OutFile to fd 1 ...
I0717 18:20:31.930316  411205 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 18:20:31.930334  411205 out.go:304] Setting ErrFile to fd 2...
I0717 18:20:31.930341  411205 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 18:20:31.930521  411205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
I0717 18:20:31.931106  411205 config.go:182] Loaded profile config "functional-291239": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 18:20:31.931215  411205 config.go:182] Loaded profile config "functional-291239": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 18:20:31.931624  411205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:20:31.931667  411205 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:20:31.946845  411205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
I0717 18:20:31.947353  411205 main.go:141] libmachine: () Calling .GetVersion
I0717 18:20:31.948044  411205 main.go:141] libmachine: Using API Version  1
I0717 18:20:31.948077  411205 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:20:31.948443  411205 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:20:31.948660  411205 main.go:141] libmachine: (functional-291239) Calling .GetState
I0717 18:20:31.950636  411205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:20:31.950684  411205 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:20:31.966074  411205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
I0717 18:20:31.966477  411205 main.go:141] libmachine: () Calling .GetVersion
I0717 18:20:31.967051  411205 main.go:141] libmachine: Using API Version  1
I0717 18:20:31.967089  411205 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:20:31.967417  411205 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:20:31.967680  411205 main.go:141] libmachine: (functional-291239) Calling .DriverName
I0717 18:20:31.967938  411205 ssh_runner.go:195] Run: systemctl --version
I0717 18:20:31.967967  411205 main.go:141] libmachine: (functional-291239) Calling .GetSSHHostname
I0717 18:20:31.970521  411205 main.go:141] libmachine: (functional-291239) DBG | domain functional-291239 has defined MAC address 52:54:00:19:bb:9f in network mk-functional-291239
I0717 18:20:31.970852  411205 main.go:141] libmachine: (functional-291239) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:bb:9f", ip: ""} in network mk-functional-291239: {Iface:virbr1 ExpiryTime:2024-07-17 19:17:15 +0000 UTC Type:0 Mac:52:54:00:19:bb:9f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:functional-291239 Clientid:01:52:54:00:19:bb:9f}
I0717 18:20:31.970893  411205 main.go:141] libmachine: (functional-291239) DBG | domain functional-291239 has defined IP address 192.168.39.137 and MAC address 52:54:00:19:bb:9f in network mk-functional-291239
I0717 18:20:31.970971  411205 main.go:141] libmachine: (functional-291239) Calling .GetSSHPort
I0717 18:20:31.971123  411205 main.go:141] libmachine: (functional-291239) Calling .GetSSHKeyPath
I0717 18:20:31.971243  411205 main.go:141] libmachine: (functional-291239) Calling .GetSSHUsername
I0717 18:20:31.971365  411205 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/functional-291239/id_rsa Username:docker}
I0717 18:20:32.051337  411205 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 18:20:32.087196  411205 main.go:141] libmachine: Making call to close driver server
I0717 18:20:32.087215  411205 main.go:141] libmachine: (functional-291239) Calling .Close
I0717 18:20:32.087546  411205 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:20:32.087572  411205 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 18:20:32.087583  411205 main.go:141] libmachine: Making call to close driver server
I0717 18:20:32.087596  411205 main.go:141] libmachine: (functional-291239) Calling .Close
I0717 18:20:32.087861  411205 main.go:141] libmachine: (functional-291239) DBG | Closing plugin on server side
I0717 18:20:32.087923  411205 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:20:32.087950  411205 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-291239 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240513-cd2ac642 | ac1c61439df46 | 65.9MB |
| docker.io/library/nginx                 | latest             | fffffc90d343c | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.30.2            | e874818b3caac | 112MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-proxy              | v1.30.2            | 53c535741fb44 | 86MB   |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kicbase/echo-server           | functional-291239  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-scheduler          | v1.30.2            | 7820c83aa1394 | 63.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-291239  | 6f7a6fb6d704e | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.30.2            | 56ce0fd9fb532 | 118MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-291239 image ls --format table --alsologtostderr:
I0717 18:20:34.368776  411348 out.go:291] Setting OutFile to fd 1 ...
I0717 18:20:34.368896  411348 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 18:20:34.368906  411348 out.go:304] Setting ErrFile to fd 2...
I0717 18:20:34.368910  411348 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 18:20:34.369068  411348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
I0717 18:20:34.369678  411348 config.go:182] Loaded profile config "functional-291239": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 18:20:34.369782  411348 config.go:182] Loaded profile config "functional-291239": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 18:20:34.370144  411348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:20:34.370192  411348 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:20:34.385053  411348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43113
I0717 18:20:34.385557  411348 main.go:141] libmachine: () Calling .GetVersion
I0717 18:20:34.386135  411348 main.go:141] libmachine: Using API Version  1
I0717 18:20:34.386158  411348 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:20:34.386459  411348 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:20:34.386662  411348 main.go:141] libmachine: (functional-291239) Calling .GetState
I0717 18:20:34.388444  411348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:20:34.388479  411348 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:20:34.404722  411348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
I0717 18:20:34.405232  411348 main.go:141] libmachine: () Calling .GetVersion
I0717 18:20:34.405861  411348 main.go:141] libmachine: Using API Version  1
I0717 18:20:34.405897  411348 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:20:34.406278  411348 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:20:34.406488  411348 main.go:141] libmachine: (functional-291239) Calling .DriverName
I0717 18:20:34.406794  411348 ssh_runner.go:195] Run: systemctl --version
I0717 18:20:34.406851  411348 main.go:141] libmachine: (functional-291239) Calling .GetSSHHostname
I0717 18:20:34.409762  411348 main.go:141] libmachine: (functional-291239) DBG | domain functional-291239 has defined MAC address 52:54:00:19:bb:9f in network mk-functional-291239
I0717 18:20:34.410171  411348 main.go:141] libmachine: (functional-291239) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:bb:9f", ip: ""} in network mk-functional-291239: {Iface:virbr1 ExpiryTime:2024-07-17 19:17:15 +0000 UTC Type:0 Mac:52:54:00:19:bb:9f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:functional-291239 Clientid:01:52:54:00:19:bb:9f}
I0717 18:20:34.410193  411348 main.go:141] libmachine: (functional-291239) DBG | domain functional-291239 has defined IP address 192.168.39.137 and MAC address 52:54:00:19:bb:9f in network mk-functional-291239
I0717 18:20:34.410368  411348 main.go:141] libmachine: (functional-291239) Calling .GetSSHPort
I0717 18:20:34.410558  411348 main.go:141] libmachine: (functional-291239) Calling .GetSSHKeyPath
I0717 18:20:34.410749  411348 main.go:141] libmachine: (functional-291239) Calling .GetSSHUsername
I0717 18:20:34.410898  411348 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/functional-291239/id_rsa Username:docker}
I0717 18:20:34.495990  411348 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 18:20:34.548923  411348 main.go:141] libmachine: Making call to close driver server
I0717 18:20:34.548940  411348 main.go:141] libmachine: (functional-291239) Calling .Close
I0717 18:20:34.549225  411348 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:20:34.549240  411348 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 18:20:34.549275  411348 main.go:141] libmachine: Making call to close driver server
I0717 18:20:34.549292  411348 main.go:141] libmachine: (functional-291239) Calling .Close
I0717 18:20:34.549335  411348 main.go:141] libmachine: (functional-291239) DBG | Closing plugin on server side
I0717 18:20:34.549525  411348 main.go:141] libmachine: (functional-291239) DBG | Closing plugin on server side
I0717 18:20:34.549525  411348 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:20:34.549612  411348 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-291239 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e","registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"112194888"},{"
id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-291239"],"size":"4943877"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":["docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650
f6f8bf5f0f95df","docker.io/library/nginx@sha256:db5e49f40979ce521f05f0bc9f513d0abacce47904e229f3a95c2e6d9b47f244"],"repoTags":["docker.io/library/nginx:latest"],"size":"191746190"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"6f7a6fb6d704ea2197145afb7162687790693472f59ed814d5c7b34c392c3ee4","repoDigests":["localhost/mi
nikube-local-cache-test@sha256:84a50d074fd114ad6ea21eb459dde756341a13237c98f0d3377fb675df8bdd47"],"repoTags":["localhost/minikube-local-cache-test:functional-291239"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66
a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"117609954"},{"id":"ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f","repoDigests":["docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266","docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"65908273"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":["registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed96
1","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"85953433"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc","registry.k8s.io/kube-scheduler@sha256:
15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"63051080"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-291239 image ls --format json --alsologtostderr:
I0717 18:20:34.157418  411308 out.go:291] Setting OutFile to fd 1 ...
I0717 18:20:34.157752  411308 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 18:20:34.157767  411308 out.go:304] Setting ErrFile to fd 2...
I0717 18:20:34.157774  411308 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 18:20:34.158015  411308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
I0717 18:20:34.158717  411308 config.go:182] Loaded profile config "functional-291239": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 18:20:34.158860  411308 config.go:182] Loaded profile config "functional-291239": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 18:20:34.159302  411308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:20:34.159356  411308 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:20:34.174511  411308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40343
I0717 18:20:34.175025  411308 main.go:141] libmachine: () Calling .GetVersion
I0717 18:20:34.175673  411308 main.go:141] libmachine: Using API Version  1
I0717 18:20:34.175698  411308 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:20:34.176061  411308 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:20:34.176283  411308 main.go:141] libmachine: (functional-291239) Calling .GetState
I0717 18:20:34.178141  411308 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:20:34.178217  411308 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:20:34.194565  411308 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
I0717 18:20:34.195071  411308 main.go:141] libmachine: () Calling .GetVersion
I0717 18:20:34.195601  411308 main.go:141] libmachine: Using API Version  1
I0717 18:20:34.195632  411308 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:20:34.196003  411308 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:20:34.196222  411308 main.go:141] libmachine: (functional-291239) Calling .DriverName
I0717 18:20:34.196438  411308 ssh_runner.go:195] Run: systemctl --version
I0717 18:20:34.196464  411308 main.go:141] libmachine: (functional-291239) Calling .GetSSHHostname
I0717 18:20:34.199072  411308 main.go:141] libmachine: (functional-291239) DBG | domain functional-291239 has defined MAC address 52:54:00:19:bb:9f in network mk-functional-291239
I0717 18:20:34.199454  411308 main.go:141] libmachine: (functional-291239) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:bb:9f", ip: ""} in network mk-functional-291239: {Iface:virbr1 ExpiryTime:2024-07-17 19:17:15 +0000 UTC Type:0 Mac:52:54:00:19:bb:9f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:functional-291239 Clientid:01:52:54:00:19:bb:9f}
I0717 18:20:34.199481  411308 main.go:141] libmachine: (functional-291239) DBG | domain functional-291239 has defined IP address 192.168.39.137 and MAC address 52:54:00:19:bb:9f in network mk-functional-291239
I0717 18:20:34.199592  411308 main.go:141] libmachine: (functional-291239) Calling .GetSSHPort
I0717 18:20:34.199778  411308 main.go:141] libmachine: (functional-291239) Calling .GetSSHKeyPath
I0717 18:20:34.199938  411308 main.go:141] libmachine: (functional-291239) Calling .GetSSHUsername
I0717 18:20:34.200074  411308 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/functional-291239/id_rsa Username:docker}
I0717 18:20:34.280547  411308 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 18:20:34.319059  411308 main.go:141] libmachine: Making call to close driver server
I0717 18:20:34.319077  411308 main.go:141] libmachine: (functional-291239) Calling .Close
I0717 18:20:34.319413  411308 main.go:141] libmachine: (functional-291239) DBG | Closing plugin on server side
I0717 18:20:34.319429  411308 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:20:34.319443  411308 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 18:20:34.319455  411308 main.go:141] libmachine: Making call to close driver server
I0717 18:20:34.319465  411308 main.go:141] libmachine: (functional-291239) Calling .Close
I0717 18:20:34.319711  411308 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:20:34.319728  411308 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 18:20:34.319744  411308 main.go:141] libmachine: (functional-291239) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-291239 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f
repoDigests:
- docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "65908273"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests:
- docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
- docker.io/library/nginx@sha256:db5e49f40979ce521f05f0bc9f513d0abacce47904e229f3a95c2e6d9b47f244
repoTags:
- docker.io/library/nginx:latest
size: "191746190"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests:
- registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961
- registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "85953433"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-291239
size: "4943877"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc
- registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "63051080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6f7a6fb6d704ea2197145afb7162687790693472f59ed814d5c7b34c392c3ee4
repoDigests:
- localhost/minikube-local-cache-test@sha256:84a50d074fd114ad6ea21eb459dde756341a13237c98f0d3377fb675df8bdd47
repoTags:
- localhost/minikube-local-cache-test:functional-291239
size: "3330"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816
- registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117609954"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e
- registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "112194888"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-291239 image ls --format yaml --alsologtostderr:
I0717 18:20:32.139062  411229 out.go:291] Setting OutFile to fd 1 ...
I0717 18:20:32.139217  411229 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 18:20:32.139228  411229 out.go:304] Setting ErrFile to fd 2...
I0717 18:20:32.139234  411229 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 18:20:32.139414  411229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
I0717 18:20:32.140064  411229 config.go:182] Loaded profile config "functional-291239": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 18:20:32.140183  411229 config.go:182] Loaded profile config "functional-291239": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 18:20:32.140646  411229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:20:32.140703  411229 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:20:32.155647  411229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
I0717 18:20:32.156078  411229 main.go:141] libmachine: () Calling .GetVersion
I0717 18:20:32.156679  411229 main.go:141] libmachine: Using API Version  1
I0717 18:20:32.156701  411229 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:20:32.157038  411229 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:20:32.157234  411229 main.go:141] libmachine: (functional-291239) Calling .GetState
I0717 18:20:32.158951  411229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:20:32.158989  411229 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:20:32.174207  411229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32947
I0717 18:20:32.174620  411229 main.go:141] libmachine: () Calling .GetVersion
I0717 18:20:32.175161  411229 main.go:141] libmachine: Using API Version  1
I0717 18:20:32.175182  411229 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:20:32.175572  411229 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:20:32.175790  411229 main.go:141] libmachine: (functional-291239) Calling .DriverName
I0717 18:20:32.175981  411229 ssh_runner.go:195] Run: systemctl --version
I0717 18:20:32.176006  411229 main.go:141] libmachine: (functional-291239) Calling .GetSSHHostname
I0717 18:20:32.178710  411229 main.go:141] libmachine: (functional-291239) DBG | domain functional-291239 has defined MAC address 52:54:00:19:bb:9f in network mk-functional-291239
I0717 18:20:32.179069  411229 main.go:141] libmachine: (functional-291239) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:bb:9f", ip: ""} in network mk-functional-291239: {Iface:virbr1 ExpiryTime:2024-07-17 19:17:15 +0000 UTC Type:0 Mac:52:54:00:19:bb:9f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:functional-291239 Clientid:01:52:54:00:19:bb:9f}
I0717 18:20:32.179096  411229 main.go:141] libmachine: (functional-291239) DBG | domain functional-291239 has defined IP address 192.168.39.137 and MAC address 52:54:00:19:bb:9f in network mk-functional-291239
I0717 18:20:32.179233  411229 main.go:141] libmachine: (functional-291239) Calling .GetSSHPort
I0717 18:20:32.179395  411229 main.go:141] libmachine: (functional-291239) Calling .GetSSHKeyPath
I0717 18:20:32.179532  411229 main.go:141] libmachine: (functional-291239) Calling .GetSSHUsername
I0717 18:20:32.179661  411229 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/functional-291239/id_rsa Username:docker}
I0717 18:20:32.263112  411229 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 18:20:32.316355  411229 main.go:141] libmachine: Making call to close driver server
I0717 18:20:32.316375  411229 main.go:141] libmachine: (functional-291239) Calling .Close
I0717 18:20:32.316788  411229 main.go:141] libmachine: (functional-291239) DBG | Closing plugin on server side
I0717 18:20:32.316788  411229 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:20:32.316841  411229 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 18:20:32.316858  411229 main.go:141] libmachine: Making call to close driver server
I0717 18:20:32.316871  411229 main.go:141] libmachine: (functional-291239) Calling .Close
I0717 18:20:32.317108  411229 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:20:32.317123  411229 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 18:20:32.317148  411229 main.go:141] libmachine: (functional-291239) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.64s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-291239 ssh pgrep buildkitd: exit status 1 (197.305134ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image build -t localhost/my-image:functional-291239 testdata/build --alsologtostderr
2024/07/17 18:20:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-291239 image build -t localhost/my-image:functional-291239 testdata/build --alsologtostderr: (3.215165624s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-291239 image build -t localhost/my-image:functional-291239 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> eba29fe550f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-291239
--> 87f8f384a8e
Successfully tagged localhost/my-image:functional-291239
87f8f384a8eac2e9aedcfe5e2ac0075606e113001305c572b1dd252ef2c5b05c
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-291239 image build -t localhost/my-image:functional-291239 testdata/build --alsologtostderr:
I0717 18:20:32.567317  411283 out.go:291] Setting OutFile to fd 1 ...
I0717 18:20:32.567453  411283 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 18:20:32.567466  411283 out.go:304] Setting ErrFile to fd 2...
I0717 18:20:32.567473  411283 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 18:20:32.567681  411283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
I0717 18:20:32.568398  411283 config.go:182] Loaded profile config "functional-291239": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 18:20:32.569111  411283 config.go:182] Loaded profile config "functional-291239": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 18:20:32.569527  411283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:20:32.569572  411283 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:20:32.586718  411283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
I0717 18:20:32.587263  411283 main.go:141] libmachine: () Calling .GetVersion
I0717 18:20:32.587974  411283 main.go:141] libmachine: Using API Version  1
I0717 18:20:32.588008  411283 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:20:32.588524  411283 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:20:32.588923  411283 main.go:141] libmachine: (functional-291239) Calling .GetState
I0717 18:20:32.591146  411283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 18:20:32.591208  411283 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 18:20:32.609439  411283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35589
I0717 18:20:32.609917  411283 main.go:141] libmachine: () Calling .GetVersion
I0717 18:20:32.610443  411283 main.go:141] libmachine: Using API Version  1
I0717 18:20:32.610484  411283 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 18:20:32.610882  411283 main.go:141] libmachine: () Calling .GetMachineName
I0717 18:20:32.611081  411283 main.go:141] libmachine: (functional-291239) Calling .DriverName
I0717 18:20:32.611377  411283 ssh_runner.go:195] Run: systemctl --version
I0717 18:20:32.611415  411283 main.go:141] libmachine: (functional-291239) Calling .GetSSHHostname
I0717 18:20:32.614573  411283 main.go:141] libmachine: (functional-291239) DBG | domain functional-291239 has defined MAC address 52:54:00:19:bb:9f in network mk-functional-291239
I0717 18:20:32.614982  411283 main.go:141] libmachine: (functional-291239) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:bb:9f", ip: ""} in network mk-functional-291239: {Iface:virbr1 ExpiryTime:2024-07-17 19:17:15 +0000 UTC Type:0 Mac:52:54:00:19:bb:9f Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:functional-291239 Clientid:01:52:54:00:19:bb:9f}
I0717 18:20:32.615008  411283 main.go:141] libmachine: (functional-291239) DBG | domain functional-291239 has defined IP address 192.168.39.137 and MAC address 52:54:00:19:bb:9f in network mk-functional-291239
I0717 18:20:32.615184  411283 main.go:141] libmachine: (functional-291239) Calling .GetSSHPort
I0717 18:20:32.615386  411283 main.go:141] libmachine: (functional-291239) Calling .GetSSHKeyPath
I0717 18:20:32.615568  411283 main.go:141] libmachine: (functional-291239) Calling .GetSSHUsername
I0717 18:20:32.615722  411283 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/functional-291239/id_rsa Username:docker}
I0717 18:20:32.722709  411283 build_images.go:161] Building image from path: /tmp/build.73140794.tar
I0717 18:20:32.722775  411283 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 18:20:32.736400  411283 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.73140794.tar
I0717 18:20:32.741214  411283 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.73140794.tar: stat -c "%s %y" /var/lib/minikube/build/build.73140794.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.73140794.tar': No such file or directory
I0717 18:20:32.741245  411283 ssh_runner.go:362] scp /tmp/build.73140794.tar --> /var/lib/minikube/build/build.73140794.tar (3072 bytes)
I0717 18:20:32.775576  411283 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.73140794
I0717 18:20:32.790685  411283 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.73140794 -xf /var/lib/minikube/build/build.73140794.tar
I0717 18:20:32.803058  411283 crio.go:315] Building image: /var/lib/minikube/build/build.73140794
I0717 18:20:32.803142  411283 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-291239 /var/lib/minikube/build/build.73140794 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0717 18:20:35.705722  411283 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-291239 /var/lib/minikube/build/build.73140794 --cgroup-manager=cgroupfs: (2.902533184s)
I0717 18:20:35.705810  411283 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.73140794
I0717 18:20:35.718168  411283 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.73140794.tar
I0717 18:20:35.728650  411283 build_images.go:217] Built localhost/my-image:functional-291239 from /tmp/build.73140794.tar
I0717 18:20:35.728689  411283 build_images.go:133] succeeded building to: functional-291239
I0717 18:20:35.728693  411283 build_images.go:134] failed building to: 
I0717 18:20:35.728726  411283 main.go:141] libmachine: Making call to close driver server
I0717 18:20:35.728743  411283 main.go:141] libmachine: (functional-291239) Calling .Close
I0717 18:20:35.729054  411283 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:20:35.729075  411283 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 18:20:35.729083  411283 main.go:141] libmachine: Making call to close driver server
I0717 18:20:35.729082  411283 main.go:141] libmachine: (functional-291239) DBG | Closing plugin on server side
I0717 18:20:35.729094  411283 main.go:141] libmachine: (functional-291239) Calling .Close
I0717 18:20:35.729339  411283 main.go:141] libmachine: Successfully made call to close driver server
I0717 18:20:35.729365  411283 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 18:20:35.729366  411283 main.go:141] libmachine: (functional-291239) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.64s)
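For reference, the three STEP lines above imply that testdata/build holds a Containerfile along these lines (a reconstruction from the logged build steps, not the verbatim file; content.txt is assumed to sit next to it in the build context):

    # sketch reconstructed from STEP 1/3 .. 3/3 above
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

The test builds it with: out/minikube-linux-amd64 -p functional-291239 image build -t localhost/my-image:functional-291239 testdata/build --alsologtostderr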
x
+
TestFunctional/parallel/ImageCommands/Setup (1.96s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.938461944s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-291239
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.96s)
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.69s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image load --daemon docker.io/kicbase/echo-server:functional-291239 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-291239 image load --daemon docker.io/kicbase/echo-server:functional-291239 --alsologtostderr: (1.470018993s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.69s)
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image load --daemon docker.io/kicbase/echo-server:functional-291239 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.9s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-291239
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image load --daemon docker.io/kicbase/echo-server:functional-291239 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.90s)
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.64s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image save docker.io/kicbase/echo-server:functional-291239 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.64s)
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image rm docker.io/kicbase/echo-server:functional-291239 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.65s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-291239 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.412679359s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.65s)
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-291239
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-291239 image save --daemon docker.io/kicbase/echo-server:functional-291239 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-291239
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
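Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above cover the full image round trip; condensed into shell form using the same commands and paths as logged on this runner:

    # save the tagged image to a tarball, drop it from the runtime, then load it back and verify
    out/minikube-linux-amd64 -p functional-291239 image save docker.io/kicbase/echo-server:functional-291239 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-291239 image rm docker.io/kicbase/echo-server:functional-291239 --alsologtostderr
    out/minikube-linux-amd64 -p functional-291239 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-291239 image ls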
x
+
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-291239
--- PASS: TestFunctional/delete_echo-server_images (0.04s)
x
+
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-291239
--- PASS: TestFunctional/delete_my-image_image (0.02s)
x
+
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-291239
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
x
+
TestMultiControlPlane/serial/StartCluster (210.78s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-445282 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 18:22:13.090663  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
E0717 18:22:40.781576  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-445282 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m30.124036366s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (210.78s)
x
+
TestMultiControlPlane/serial/DeployApp (6.26s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-445282 -- rollout status deployment/busybox: (4.065965444s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- exec busybox-fc5497c4f-blwvw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- exec busybox-fc5497c4f-mcsw8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- exec busybox-fc5497c4f-xjpp8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- exec busybox-fc5497c4f-blwvw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- exec busybox-fc5497c4f-mcsw8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- exec busybox-fc5497c4f-xjpp8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- exec busybox-fc5497c4f-blwvw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- exec busybox-fc5497c4f-mcsw8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- exec busybox-fc5497c4f-xjpp8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.26s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- exec busybox-fc5497c4f-blwvw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- exec busybox-fc5497c4f-blwvw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- exec busybox-fc5497c4f-mcsw8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- exec busybox-fc5497c4f-mcsw8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- exec busybox-fc5497c4f-xjpp8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-445282 -- exec busybox-fc5497c4f-xjpp8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.26s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-445282 -v=7 --alsologtostderr
E0717 18:25:05.951758  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:25:05.957088  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:25:05.967426  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:25:05.987828  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:25:06.028187  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:25:06.108840  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:25:06.269181  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:25:06.589720  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:25:07.230785  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:25:08.511110  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:25:11.072290  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:25:16.193312  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:25:26.434214  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-445282 -v=7 --alsologtostderr: (56.416837582s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.26s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-445282 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.83s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp testdata/cp-test.txt ha-445282:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp ha-445282:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3528186093/001/cp-test_ha-445282.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp ha-445282:/home/docker/cp-test.txt ha-445282-m02:/home/docker/cp-test_ha-445282_ha-445282-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m02 "sudo cat /home/docker/cp-test_ha-445282_ha-445282-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp ha-445282:/home/docker/cp-test.txt ha-445282-m03:/home/docker/cp-test_ha-445282_ha-445282-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m03 "sudo cat /home/docker/cp-test_ha-445282_ha-445282-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp ha-445282:/home/docker/cp-test.txt ha-445282-m04:/home/docker/cp-test_ha-445282_ha-445282-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m04 "sudo cat /home/docker/cp-test_ha-445282_ha-445282-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp testdata/cp-test.txt ha-445282-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp ha-445282-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3528186093/001/cp-test_ha-445282-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp ha-445282-m02:/home/docker/cp-test.txt ha-445282:/home/docker/cp-test_ha-445282-m02_ha-445282.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282 "sudo cat /home/docker/cp-test_ha-445282-m02_ha-445282.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp ha-445282-m02:/home/docker/cp-test.txt ha-445282-m03:/home/docker/cp-test_ha-445282-m02_ha-445282-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m03 "sudo cat /home/docker/cp-test_ha-445282-m02_ha-445282-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp ha-445282-m02:/home/docker/cp-test.txt ha-445282-m04:/home/docker/cp-test_ha-445282-m02_ha-445282-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m04 "sudo cat /home/docker/cp-test_ha-445282-m02_ha-445282-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp testdata/cp-test.txt ha-445282-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp ha-445282-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3528186093/001/cp-test_ha-445282-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp ha-445282-m03:/home/docker/cp-test.txt ha-445282:/home/docker/cp-test_ha-445282-m03_ha-445282.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282 "sudo cat /home/docker/cp-test_ha-445282-m03_ha-445282.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp ha-445282-m03:/home/docker/cp-test.txt ha-445282-m02:/home/docker/cp-test_ha-445282-m03_ha-445282-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m02 "sudo cat /home/docker/cp-test_ha-445282-m03_ha-445282-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp ha-445282-m03:/home/docker/cp-test.txt ha-445282-m04:/home/docker/cp-test_ha-445282-m03_ha-445282-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m04 "sudo cat /home/docker/cp-test_ha-445282-m03_ha-445282-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp testdata/cp-test.txt ha-445282-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3528186093/001/cp-test_ha-445282-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt ha-445282:/home/docker/cp-test_ha-445282-m04_ha-445282.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282 "sudo cat /home/docker/cp-test_ha-445282-m04_ha-445282.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt ha-445282-m02:/home/docker/cp-test_ha-445282-m04_ha-445282-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m02 "sudo cat /home/docker/cp-test_ha-445282-m04_ha-445282-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 cp ha-445282-m04:/home/docker/cp-test.txt ha-445282-m03:/home/docker/cp-test_ha-445282-m04_ha-445282-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 ssh -n ha-445282-m03 "sudo cat /home/docker/cp-test_ha-445282-m04_ha-445282-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.83s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.48025335s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.16s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-445282 node delete m03 -v=7 --alsologtostderr: (16.407348668s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.16s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (331.79s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-445282 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 18:40:05.951834  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:41:28.997383  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:42:13.091331  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-445282 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m31.040164889s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (331.79s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (74.89s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-445282 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-445282 --control-plane -v=7 --alsologtostderr: (1m14.043540143s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-445282 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.89s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
TestJSONOutput/start/Command (55.3s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-502356 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0717 18:45:05.951455  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-502356 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.298437818s)
--- PASS: TestJSONOutput/start/Command (55.30s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-502356 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-502356 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.37s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-502356 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-502356 --output=json --user=testUser: (7.366262075s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-309405 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-309405 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.221091ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d6c1ffe9-6482-4f5e-957d-ce7597081fb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-309405] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7fd49d36-2cfb-4919-9000-a57e55dcf51b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19282"}}
	{"specversion":"1.0","id":"b25da127-083c-4fbb-b5c6-5df5bb1edab0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"45fb099f-016a-482d-ad13-dd7ab07079b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig"}}
	{"specversion":"1.0","id":"ed282359-01ab-4cc5-81f1-95a874957ce3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube"}}
	{"specversion":"1.0","id":"f132cf05-79e3-4bc8-bbbb-ca1225375679","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"aebd2bb7-99d6-49ba-9756-8d75df7f9a5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fd56944c-321f-4449-ab9d-f1b5ef5e3d16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-309405" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-309405
--- PASS: TestErrorJSONOutput (0.20s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (86.9s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-333565 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-333565 --driver=kvm2  --container-runtime=crio: (40.335685044s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-336506 --driver=kvm2  --container-runtime=crio
E0717 18:47:13.090556  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-336506 --driver=kvm2  --container-runtime=crio: (43.912657222s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-333565
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-336506
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-336506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-336506
helpers_test.go:175: Cleaning up "first-333565" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-333565
--- PASS: TestMinikubeProfile (86.90s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (30.88s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-831202 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-831202 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.884327873s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.88s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-831202 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-831202 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (30.16s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-847415 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-847415 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.155901591s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.16s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-847415 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-847415 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-831202 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-847415 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-847415 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-847415
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-847415: (1.275206389s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.15s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-847415
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-847415: (20.14486715s)
--- PASS: TestMountStart/serial/RestartStopped (21.15s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-847415 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-847415 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (117.14s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-717026 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 18:50:05.951485  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 18:50:16.143639  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-717026 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m56.733131442s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (117.14s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.75s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717026 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717026 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-717026 -- rollout status deployment/busybox: (4.302267811s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717026 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717026 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717026 -- exec busybox-fc5497c4f-5vj5m -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717026 -- exec busybox-fc5497c4f-gj58q -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717026 -- exec busybox-fc5497c4f-5vj5m -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717026 -- exec busybox-fc5497c4f-gj58q -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717026 -- exec busybox-fc5497c4f-5vj5m -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717026 -- exec busybox-fc5497c4f-gj58q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.75s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717026 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717026 -- exec busybox-fc5497c4f-5vj5m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717026 -- exec busybox-fc5497c4f-5vj5m -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717026 -- exec busybox-fc5497c4f-gj58q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-717026 -- exec busybox-fc5497c4f-gj58q -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

                                                
                                    
TestMultiNode/serial/AddNode (52.13s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-717026 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-717026 -v 3 --alsologtostderr: (51.566627693s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.13s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-717026 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.2s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 cp testdata/cp-test.txt multinode-717026:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 cp multinode-717026:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4227061913/001/cp-test_multinode-717026.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 cp multinode-717026:/home/docker/cp-test.txt multinode-717026-m02:/home/docker/cp-test_multinode-717026_multinode-717026-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026-m02 "sudo cat /home/docker/cp-test_multinode-717026_multinode-717026-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 cp multinode-717026:/home/docker/cp-test.txt multinode-717026-m03:/home/docker/cp-test_multinode-717026_multinode-717026-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026-m03 "sudo cat /home/docker/cp-test_multinode-717026_multinode-717026-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 cp testdata/cp-test.txt multinode-717026-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 cp multinode-717026-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4227061913/001/cp-test_multinode-717026-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 cp multinode-717026-m02:/home/docker/cp-test.txt multinode-717026:/home/docker/cp-test_multinode-717026-m02_multinode-717026.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026 "sudo cat /home/docker/cp-test_multinode-717026-m02_multinode-717026.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 cp multinode-717026-m02:/home/docker/cp-test.txt multinode-717026-m03:/home/docker/cp-test_multinode-717026-m02_multinode-717026-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026-m03 "sudo cat /home/docker/cp-test_multinode-717026-m02_multinode-717026-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 cp testdata/cp-test.txt multinode-717026-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 cp multinode-717026-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4227061913/001/cp-test_multinode-717026-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 cp multinode-717026-m03:/home/docker/cp-test.txt multinode-717026:/home/docker/cp-test_multinode-717026-m03_multinode-717026.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026 "sudo cat /home/docker/cp-test_multinode-717026-m03_multinode-717026.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 cp multinode-717026-m03:/home/docker/cp-test.txt multinode-717026-m02:/home/docker/cp-test_multinode-717026-m03_multinode-717026-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 ssh -n multinode-717026-m02 "sudo cat /home/docker/cp-test_multinode-717026-m03_multinode-717026-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.20s)

                                                
                                    
TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-717026 node stop m03: (1.42973961s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-717026 status: exit status 7 (418.953327ms)

                                                
                                                
-- stdout --
	multinode-717026
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-717026-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-717026-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-717026 status --alsologtostderr: exit status 7 (430.212998ms)

                                                
                                                
-- stdout --
	multinode-717026
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-717026-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-717026-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:51:59.535072  428718 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:51:59.535194  428718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:51:59.535203  428718 out.go:304] Setting ErrFile to fd 2...
	I0717 18:51:59.535207  428718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:51:59.535395  428718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 18:51:59.535544  428718 out.go:298] Setting JSON to false
	I0717 18:51:59.535577  428718 mustload.go:65] Loading cluster: multinode-717026
	I0717 18:51:59.535632  428718 notify.go:220] Checking for updates...
	I0717 18:51:59.535937  428718 config.go:182] Loaded profile config "multinode-717026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:51:59.535957  428718 status.go:255] checking status of multinode-717026 ...
	I0717 18:51:59.536328  428718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:51:59.536379  428718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:51:59.551966  428718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42019
	I0717 18:51:59.552452  428718 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:51:59.553018  428718 main.go:141] libmachine: Using API Version  1
	I0717 18:51:59.553039  428718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:51:59.553455  428718 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:51:59.553666  428718 main.go:141] libmachine: (multinode-717026) Calling .GetState
	I0717 18:51:59.555637  428718 status.go:330] multinode-717026 host status = "Running" (err=<nil>)
	I0717 18:51:59.555657  428718 host.go:66] Checking if "multinode-717026" exists ...
	I0717 18:51:59.555958  428718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:51:59.556007  428718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:51:59.572396  428718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45759
	I0717 18:51:59.572877  428718 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:51:59.573443  428718 main.go:141] libmachine: Using API Version  1
	I0717 18:51:59.573475  428718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:51:59.573813  428718 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:51:59.574004  428718 main.go:141] libmachine: (multinode-717026) Calling .GetIP
	I0717 18:51:59.577379  428718 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:51:59.577857  428718 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:51:59.577896  428718 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:51:59.578083  428718 host.go:66] Checking if "multinode-717026" exists ...
	I0717 18:51:59.578424  428718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:51:59.578468  428718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:51:59.594729  428718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42617
	I0717 18:51:59.595179  428718 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:51:59.595681  428718 main.go:141] libmachine: Using API Version  1
	I0717 18:51:59.595703  428718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:51:59.596031  428718 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:51:59.596241  428718 main.go:141] libmachine: (multinode-717026) Calling .DriverName
	I0717 18:51:59.596465  428718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:51:59.596514  428718 main.go:141] libmachine: (multinode-717026) Calling .GetSSHHostname
	I0717 18:51:59.598985  428718 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:51:59.599436  428718 main.go:141] libmachine: (multinode-717026) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e6:56", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:49:08 +0000 UTC Type:0 Mac:52:54:00:36:e6:56 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-717026 Clientid:01:52:54:00:36:e6:56}
	I0717 18:51:59.599460  428718 main.go:141] libmachine: (multinode-717026) DBG | domain multinode-717026 has defined IP address 192.168.39.122 and MAC address 52:54:00:36:e6:56 in network mk-multinode-717026
	I0717 18:51:59.599577  428718 main.go:141] libmachine: (multinode-717026) Calling .GetSSHPort
	I0717 18:51:59.599729  428718 main.go:141] libmachine: (multinode-717026) Calling .GetSSHKeyPath
	I0717 18:51:59.599853  428718 main.go:141] libmachine: (multinode-717026) Calling .GetSSHUsername
	I0717 18:51:59.599990  428718 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/multinode-717026/id_rsa Username:docker}
	I0717 18:51:59.676036  428718 ssh_runner.go:195] Run: systemctl --version
	I0717 18:51:59.682590  428718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:51:59.698706  428718 kubeconfig.go:125] found "multinode-717026" server: "https://192.168.39.122:8443"
	I0717 18:51:59.698734  428718 api_server.go:166] Checking apiserver status ...
	I0717 18:51:59.698763  428718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:51:59.714200  428718 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0717 18:51:59.723842  428718 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:51:59.723893  428718 ssh_runner.go:195] Run: ls
	I0717 18:51:59.728697  428718 api_server.go:253] Checking apiserver healthz at https://192.168.39.122:8443/healthz ...
	I0717 18:51:59.733803  428718 api_server.go:279] https://192.168.39.122:8443/healthz returned 200:
	ok
	I0717 18:51:59.733824  428718 status.go:422] multinode-717026 apiserver status = Running (err=<nil>)
	I0717 18:51:59.733843  428718 status.go:257] multinode-717026 status: &{Name:multinode-717026 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:51:59.733869  428718 status.go:255] checking status of multinode-717026-m02 ...
	I0717 18:51:59.734165  428718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:51:59.734208  428718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:51:59.751122  428718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44351
	I0717 18:51:59.751587  428718 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:51:59.752083  428718 main.go:141] libmachine: Using API Version  1
	I0717 18:51:59.752106  428718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:51:59.752434  428718 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:51:59.752621  428718 main.go:141] libmachine: (multinode-717026-m02) Calling .GetState
	I0717 18:51:59.754250  428718 status.go:330] multinode-717026-m02 host status = "Running" (err=<nil>)
	I0717 18:51:59.754270  428718 host.go:66] Checking if "multinode-717026-m02" exists ...
	I0717 18:51:59.754568  428718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:51:59.754609  428718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:51:59.770457  428718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45077
	I0717 18:51:59.770872  428718 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:51:59.771329  428718 main.go:141] libmachine: Using API Version  1
	I0717 18:51:59.771354  428718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:51:59.771683  428718 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:51:59.771948  428718 main.go:141] libmachine: (multinode-717026-m02) Calling .GetIP
	I0717 18:51:59.774911  428718 main.go:141] libmachine: (multinode-717026-m02) DBG | domain multinode-717026-m02 has defined MAC address 52:54:00:56:37:07 in network mk-multinode-717026
	I0717 18:51:59.775317  428718 main.go:141] libmachine: (multinode-717026-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:37:07", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:50:17 +0000 UTC Type:0 Mac:52:54:00:56:37:07 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-717026-m02 Clientid:01:52:54:00:56:37:07}
	I0717 18:51:59.775348  428718 main.go:141] libmachine: (multinode-717026-m02) DBG | domain multinode-717026-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:56:37:07 in network mk-multinode-717026
	I0717 18:51:59.775483  428718 host.go:66] Checking if "multinode-717026-m02" exists ...
	I0717 18:51:59.775826  428718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:51:59.775869  428718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:51:59.791471  428718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37189
	I0717 18:51:59.791897  428718 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:51:59.792379  428718 main.go:141] libmachine: Using API Version  1
	I0717 18:51:59.792400  428718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:51:59.792736  428718 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:51:59.792975  428718 main.go:141] libmachine: (multinode-717026-m02) Calling .DriverName
	I0717 18:51:59.793173  428718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:51:59.793197  428718 main.go:141] libmachine: (multinode-717026-m02) Calling .GetSSHHostname
	I0717 18:51:59.795683  428718 main.go:141] libmachine: (multinode-717026-m02) DBG | domain multinode-717026-m02 has defined MAC address 52:54:00:56:37:07 in network mk-multinode-717026
	I0717 18:51:59.796074  428718 main.go:141] libmachine: (multinode-717026-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:37:07", ip: ""} in network mk-multinode-717026: {Iface:virbr1 ExpiryTime:2024-07-17 19:50:17 +0000 UTC Type:0 Mac:52:54:00:56:37:07 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-717026-m02 Clientid:01:52:54:00:56:37:07}
	I0717 18:51:59.796104  428718 main.go:141] libmachine: (multinode-717026-m02) DBG | domain multinode-717026-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:56:37:07 in network mk-multinode-717026
	I0717 18:51:59.796220  428718 main.go:141] libmachine: (multinode-717026-m02) Calling .GetSSHPort
	I0717 18:51:59.796411  428718 main.go:141] libmachine: (multinode-717026-m02) Calling .GetSSHKeyPath
	I0717 18:51:59.796587  428718 main.go:141] libmachine: (multinode-717026-m02) Calling .GetSSHUsername
	I0717 18:51:59.796903  428718 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19282-392903/.minikube/machines/multinode-717026-m02/id_rsa Username:docker}
	I0717 18:51:59.883683  428718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:51:59.899317  428718 status.go:257] multinode-717026-m02 status: &{Name:multinode-717026-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:51:59.899358  428718 status.go:255] checking status of multinode-717026-m03 ...
	I0717 18:51:59.899694  428718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:51:59.899740  428718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:51:59.917401  428718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46823
	I0717 18:51:59.917836  428718 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:51:59.918361  428718 main.go:141] libmachine: Using API Version  1
	I0717 18:51:59.918386  428718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:51:59.918706  428718 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:51:59.918882  428718 main.go:141] libmachine: (multinode-717026-m03) Calling .GetState
	I0717 18:51:59.920192  428718 status.go:330] multinode-717026-m03 host status = "Stopped" (err=<nil>)
	I0717 18:51:59.920211  428718 status.go:343] host is not running, skipping remaining checks
	I0717 18:51:59.920219  428718 status.go:257] multinode-717026-m03 status: &{Name:multinode-717026-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
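The stderr trace above shows the sequence `minikube status` runs for every node: ask the kvm2 driver for the host state, check the kubelet unit over SSH, locate the kube-apiserver process, and probe its /healthz endpoint. A rough manual equivalent, using only the profile name and IP visible in the log (a sketch, not the test's own code):

	$ minikube -p multinode-717026 ssh -- sudo systemctl is-active kubelet     # the kubelet check the trace performs over SSH
	$ curl -sk https://192.168.39.122:8443/healthz                             # apiserver probe; prints "ok" when healthy (assumes anonymous /healthz access, as in a default minikube cluster)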

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (39.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 node start m03 -v=7 --alsologtostderr
E0717 18:52:13.090749  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-717026 node start m03 -v=7 --alsologtostderr: (38.676426176s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.30s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-717026 node delete m03: (1.817318105s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.34s)
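The closing `kubectl get nodes -o go-template` call prints one Ready-condition status per node, which is how the test confirms the nodes that remain after deleting m03. An equivalent check written with jsonpath (an alternative phrasing, not what the test itself runs) could be:

	$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'   # node name and Ready status per line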

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (183.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-717026 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 19:02:13.091386  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-717026 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m2.912202034s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-717026 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (183.43s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (45.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-717026
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-717026-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-717026-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.260014ms)

                                                
                                                
-- stdout --
	* [multinode-717026-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19282
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-717026-m02' is duplicated with machine name 'multinode-717026-m02' in profile 'multinode-717026'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-717026-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-717026-m03 --driver=kvm2  --container-runtime=crio: (44.478696926s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-717026
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-717026: exit status 80 (205.024708ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-717026 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-717026-m03 already exists in multinode-717026-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-717026-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.57s)
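Both non-zero exits above are deliberate: the m02 and m03 names collide with machines that already belong to the multinode-717026 profile. A hedged way to check for such clashes before choosing a profile name, using only commands that appear elsewhere in this report:

	$ minikube profile list --output=json      # existing profiles and their machines
	$ minikube node list -p multinode-717026   # node/machine names (multinode-717026, -m02, ...) a new profile must not reuse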

                                                
                                    
x
+
TestScheduledStopUnix (115.17s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-164638 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-164638 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.591438194s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-164638 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-164638 -n scheduled-stop-164638
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-164638 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-164638 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-164638 -n scheduled-stop-164638
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-164638
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-164638 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0717 19:12:13.094085  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-164638
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-164638: exit status 7 (71.443907ms)

                                                
                                                
-- stdout --
	scheduled-stop-164638
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-164638 -n scheduled-stop-164638
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-164638 -n scheduled-stop-164638: exit status 7 (61.491566ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-164638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-164638
--- PASS: TestScheduledStopUnix (115.17s)
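The test above walks minikube's scheduled-stop flow end to end. A condensed sketch of the same flow with the flags visible in the log (durations are illustrative):

	$ minikube stop -p scheduled-stop-164638 --schedule 5m                    # register a stop to run later
	$ minikube status -p scheduled-stop-164638 --format={{.TimeToStop}}       # shows the pending stop, if any
	$ minikube stop -p scheduled-stop-164638 --cancel-scheduled               # cancel it
	$ minikube stop -p scheduled-stop-164638 --schedule 15s                   # reschedule; status reports "Stopped" once it fires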

                                                
                                    
x
+
TestRunningBinaryUpgrade (234.01s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3621854189 start -p running-upgrade-337952 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3621854189 start -p running-upgrade-337952 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m14.062543011s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-337952 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0717 19:14:48.999054  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 19:15:05.951194  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-337952 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m35.923494791s)
helpers_test.go:175: Cleaning up "running-upgrade-337952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-337952
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-337952: (1.275697484s)
--- PASS: TestRunningBinaryUpgrade (234.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-197252 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-197252 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (84.020456ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-197252] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19282
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
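The MK_USAGE failure above is the expected rejection of `--kubernetes-version` combined with `--no-kubernetes`. Following the hint printed in the stderr, a working sequence would look roughly like this (sketch only; the later subtests do the equivalent):

	$ minikube config unset kubernetes-version                                                       # drop any globally pinned version, as the error suggests
	$ minikube start -p NoKubernetes-197252 --no-kubernetes --driver=kvm2 --container-runtime=crio   # start the profile without Kubernetes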

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (95.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-197252 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-197252 --driver=kvm2  --container-runtime=crio: (1m35.53962835s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-197252 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (95.78s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-197252 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-197252 --no-kubernetes --driver=kvm2  --container-runtime=crio: (6.996139326s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-197252 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-197252 status -o json: exit status 2 (236.180367ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-197252","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-197252
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.17s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (27.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-197252 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-197252 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.513403695s)
--- PASS: TestNoKubernetes/serial/Start (27.51s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.52s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (120.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3213882024 start -p stopped-upgrade-788788 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3213882024 start -p stopped-upgrade-788788 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (54.029153717s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3213882024 -p stopped-upgrade-788788 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3213882024 -p stopped-upgrade-788788 stop: (3.15529869s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-788788 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-788788 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.915383447s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (120.10s)
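This upgrade test drives one profile with two binaries: the released v1.26.0 build creates and stops the cluster, then the freshly built binary restarts it in place. Stripped of harness flags, the flow from the log is roughly:

	$ /tmp/minikube-v1.26.0.3213882024 start -p stopped-upgrade-788788 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ /tmp/minikube-v1.26.0.3213882024 -p stopped-upgrade-788788 stop
	$ out/minikube-linux-amd64 start -p stopped-upgrade-788788 --memory=2200 --driver=kvm2 --container-runtime=crio   # new binary adopts the stopped profile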

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-197252 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-197252 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.31262ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-197252
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-197252: (1.302786125s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (40.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-197252 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-197252 --driver=kvm2  --container-runtime=crio: (40.188058645s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (40.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-197252 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-197252 "sudo systemctl is-active --quiet service kubelet": exit status 1 (221.226785ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-369638 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-369638 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (148.183196ms)

                                                
                                                
-- stdout --
	* [false-369638] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19282
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:15:59.599510  439748 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:15:59.599656  439748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:15:59.599668  439748 out.go:304] Setting ErrFile to fd 2...
	I0717 19:15:59.599674  439748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:15:59.600017  439748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19282-392903/.minikube/bin
	I0717 19:15:59.600868  439748 out.go:298] Setting JSON to false
	I0717 19:15:59.602306  439748 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10703,"bootTime":1721233057,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:15:59.602442  439748 start.go:139] virtualization: kvm guest
	I0717 19:15:59.604740  439748 out.go:177] * [false-369638] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:15:59.606197  439748 out.go:177]   - MINIKUBE_LOCATION=19282
	I0717 19:15:59.606275  439748 notify.go:220] Checking for updates...
	I0717 19:15:59.608522  439748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:15:59.609731  439748 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19282-392903/kubeconfig
	I0717 19:15:59.611120  439748 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19282-392903/.minikube
	I0717 19:15:59.612452  439748 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:15:59.613686  439748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:15:59.615557  439748 config.go:182] Loaded profile config "kubernetes-upgrade-442321": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 19:15:59.615787  439748 config.go:182] Loaded profile config "running-upgrade-337952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0717 19:15:59.615971  439748 config.go:182] Loaded profile config "stopped-upgrade-788788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0717 19:15:59.616157  439748 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:15:59.673133  439748 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 19:15:59.677366  439748 start.go:297] selected driver: kvm2
	I0717 19:15:59.677395  439748 start.go:901] validating driver "kvm2" against <nil>
	I0717 19:15:59.677407  439748 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:15:59.679490  439748 out.go:177] 
	W0717 19:15:59.680947  439748 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0717 19:15:59.682173  439748 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-369638 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-369638

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-369638

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-369638

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-369638

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-369638

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-369638

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-369638

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-369638

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-369638

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-369638

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-369638

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-369638" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-369638" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Jul 2024 19:15:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.196:8443
  name: running-upgrade-337952
contexts:
- context:
    cluster: running-upgrade-337952
    extensions:
    - extension:
        last-update: Wed, 17 Jul 2024 19:15:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: running-upgrade-337952
  name: running-upgrade-337952
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-337952
  user:
    client-certificate: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/running-upgrade-337952/client.crt
    client-key: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/running-upgrade-337952/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-369638

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-369638"

                                                
                                                
----------------------- debugLogs end: false-369638 [took: 3.308183973s] --------------------------------
helpers_test.go:175: Cleaning up "false-369638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-369638
--- PASS: TestNetworkPlugins/group/false (3.63s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-788788
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

TestPause/serial/Start (120.96s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-744869 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-744869 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m0.954974067s)
--- PASS: TestPause/serial/Start (120.96s)

TestNetworkPlugins/group/auto/Start (120.9s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-369638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-369638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (2m0.897220204s)
--- PASS: TestNetworkPlugins/group/auto/Start (120.90s)

TestPause/serial/SecondStartNoReconfiguration (38.75s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-744869 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-744869 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.711472029s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.75s)

TestNetworkPlugins/group/calico/Start (87.9s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-369638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-369638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m27.904503214s)
--- PASS: TestNetworkPlugins/group/calico/Start (87.90s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-369638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-369638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nc8nj" [7f9af046-893a-49bf-a251-bfac2ede3209] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-nc8nj" [7f9af046-893a-49bf-a251-bfac2ede3209] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004162572s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-369638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-369638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-369638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestPause/serial/Pause (1.17s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-744869 --alsologtostderr -v=5
E0717 19:20:05.951582  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-744869 --alsologtostderr -v=5: (1.170514836s)
--- PASS: TestPause/serial/Pause (1.17s)

TestPause/serial/VerifyStatus (0.28s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-744869 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-744869 --output=json --layout=cluster: exit status 2 (275.840302ms)
-- stdout --
	{"Name":"pause-744869","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-744869","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)

TestPause/serial/Unpause (1.2s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-744869 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-744869 --alsologtostderr -v=5: (1.195810306s)
--- PASS: TestPause/serial/Unpause (1.20s)

TestPause/serial/PauseAgain (1.38s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-744869 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-744869 --alsologtostderr -v=5: (1.379914122s)
--- PASS: TestPause/serial/PauseAgain (1.38s)

TestPause/serial/DeletePaused (1.21s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-744869 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-744869 --alsologtostderr -v=5: (1.206401366s)
--- PASS: TestPause/serial/DeletePaused (1.21s)

TestPause/serial/VerifyDeletedResources (4.15s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.153120911s)
--- PASS: TestPause/serial/VerifyDeletedResources (4.15s)

TestNetworkPlugins/group/custom-flannel/Start (80.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-369638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-369638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m20.284744442s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (80.28s)

TestNetworkPlugins/group/kindnet/Start (99.35s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-369638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-369638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m39.352785292s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (99.35s)

TestNetworkPlugins/group/flannel/Start (112.93s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-369638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-369638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m52.93217572s)
--- PASS: TestNetworkPlugins/group/flannel/Start (112.93s)

TestNetworkPlugins/group/calico/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-6c6fm" [9447e65b-7fc2-4551-939a-a58a5cf5dfa0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.014484873s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-369638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

TestNetworkPlugins/group/calico/NetCatPod (14.36s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-369638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zxfkk" [bcec7a59-c375-499f-b6af-3eb40b10a35e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-zxfkk" [bcec7a59-c375-499f-b6af-3eb40b10a35e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.004361518s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.36s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-369638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-369638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-369638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-369638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-369638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jzk6z" [cd2ee63f-80e9-4250-91be-d152ca52f594] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jzk6z" [cd2ee63f-80e9-4250-91be-d152ca52f594] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.005769409s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.34s)

TestNetworkPlugins/group/enable-default-cni/Start (66.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-369638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-369638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m6.252561805s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.25s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-369638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-369638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-369638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-prr2s" [4a69a408-06cc-426d-b33f-4ff825aaf1be] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00712993s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-369638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-369638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7hsb5" [c1ec270d-cf69-4230-8a2b-c8bb97643484] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7hsb5" [c1ec270d-cf69-4230-8a2b-c8bb97643484] Running
E0717 19:22:13.090563  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004691781s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

TestNetworkPlugins/group/bridge/Start (107.5s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-369638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-369638 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m47.504094379s)
--- PASS: TestNetworkPlugins/group/bridge/Start (107.50s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-369638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-369638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-369638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pvjbz" [ffbe347f-a810-4a9c-b15b-cd7839b8af52] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00577962s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-369638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (13.24s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-369638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-g9qj9" [6dd3db22-de02-4bf9-a770-83fd8f46d492] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-g9qj9" [6dd3db22-de02-4bf9-a770-83fd8f46d492] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.005333792s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.24s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-369638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-369638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kkckd" [503f4b6c-1f63-48fd-a660-3890b86f1bf9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-kkckd" [503f4b6c-1f63-48fd-a660-3890b86f1bf9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003989817s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

TestNetworkPlugins/group/flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-369638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.28s)

TestNetworkPlugins/group/flannel/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-369638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.31s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-369638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-369638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-369638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-369638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestStartStop/group/no-preload/serial/FirstStart (130.23s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-713715 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-713715 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (2m10.233619646s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (130.23s)

TestStartStop/group/embed-certs/serial/FirstStart (115.78s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-637675 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 19:23:36.144757  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/addons-453453/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-637675 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (1m55.778969584s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (115.78s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-369638 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-369638 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-blmtt" [2b1ab2b8-271d-425c-aece-f8b0b8557d27] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-blmtt" [2b1ab2b8-271d-425c-aece-f8b0b8557d27] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003737713s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-369638 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-369638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-369638 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (95.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-378944 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 19:24:47.589139  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:24:47.594435  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:24:47.604695  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:24:47.624966  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:24:47.665365  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:24:47.746424  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:24:47.906854  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:24:48.227962  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:24:48.868705  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:24:50.149605  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:24:52.710466  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:24:57.831148  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
E0717 19:25:05.951316  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/functional-291239/client.crt: no such file or directory
E0717 19:25:08.072325  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/auto-369638/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-378944 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (1m35.573477926s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (95.57s)

TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-637675 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1f8bb5da-a3c0-4883-910f-44473cb5a766] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1f8bb5da-a3c0-4883-910f-44473cb5a766] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005260521s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-637675 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

TestStartStop/group/no-preload/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-713715 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [75d9f921-4990-4f7c-99d5-f2976d35cd5d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [75d9f921-4990-4f7c-99d5-f2976d35cd5d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004322008s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-713715 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.29s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-637675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-637675 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-713715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-713715 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-378944 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [12710ba3-14e3-4c6f-89a5-1c6c394076d7] Pending
helpers_test.go:344: "busybox" [12710ba3-14e3-4c6f-89a5-1c6c394076d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [12710ba3-14e3-4c6f-89a5-1c6c394076d7] Running
E0717 19:26:02.804373  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:26:02.809647  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:26:02.819995  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:26:02.840302  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:26:02.880738  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:26:02.961684  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:26:03.122241  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:26:03.443029  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004028369s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-378944 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-378944 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0717 19:26:04.084065  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-378944 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/embed-certs/serial/SecondStart (680.46s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-637675 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 19:27:57.175392  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/custom-flannel-369638/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-637675 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (11m20.200103753s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-637675 -n embed-certs-637675
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (680.46s)

TestStartStop/group/no-preload/serial/SecondStart (582.21s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-713715 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-713715 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (9m41.95813834s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-713715 -n no-preload-713715
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (582.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (599.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-378944 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 19:28:46.649596  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/calico-369638/client.crt: no such file or directory
E0717 19:28:52.817155  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:28:52.822469  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:28:52.832726  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:28:52.853029  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:28:52.893268  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:28:52.973566  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:28:53.134215  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:28:53.454824  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:28:54.095114  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:28:54.349566  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/flannel-369638/client.crt: no such file or directory
E0717 19:28:55.375887  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:28:57.937054  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
E0717 19:29:03.057365  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-378944 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (9m58.99834336s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-378944 -n default-k8s-diff-port-378944
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (599.27s)

TestStartStop/group/old-k8s-version/serial/Stop (3.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-998147 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-998147 --alsologtostderr -v=3: (3.459222126s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.46s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-998147 -n old-k8s-version-998147
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-998147 -n old-k8s-version-998147: exit status 7 (62.451107ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-998147 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/FirstStart (47.46s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-500710 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0717 19:52:49.705081  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/enable-default-cni-369638/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-500710 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (47.459598277s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.46s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-500710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-500710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.178629957s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.18s)
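The "cni mode requires additional setup" warning above comes from the test itself: in CNI mode the suite does not assume workload pods can schedule, so DeployApp, UserAppExistsAfterStop and AddonExistsAfterStop are effectively no-ops for this group. If you reproduce this configuration by hand, a CNI manifest typically has to be applied before pods schedule; a minimal sketch with a placeholder manifest path (not something this test runs):

  # Sketch only; <your-cni-manifest>.yaml is a placeholder for whichever CNI you deploy.
  kubectl --context newest-cni-500710 apply -f <your-cni-manifest>.yaml
  kubectl --context newest-cni-500710 get pods -A   # pods can schedule once the CNI is up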

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-500710 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-500710 --alsologtostderr -v=3: (7.358621331s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-500710 -n newest-cni-500710
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-500710 -n newest-cni-500710: exit status 7 (64.590917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-500710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (38.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-500710 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0717 19:53:52.817146  400171 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/bridge-369638/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-500710 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (37.800521227s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-500710 -n newest-cni-500710
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-500710 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-500710 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-500710 -n newest-cni-500710
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-500710 -n newest-cni-500710: exit status 2 (243.050477ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-500710 -n newest-cni-500710
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-500710 -n newest-cni-500710: exit status 2 (246.802429ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-500710 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-500710 -n newest-cni-500710
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-500710 -n newest-cni-500710
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.61s)
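For the Pause check above: pausing a profile freezes the Kubernetes control-plane containers and stops the kubelet, so status reports "Paused" for the API server and "Stopped" for the kubelet with exit code 2, which the test tolerates; unpause reverses it. A minimal sketch of that verify loop with a placeholder profile name:

  # Sketch only; "demo" is a placeholder profile name.
  minikube pause -p demo
  minikube status --format='{{.APIServer}}' -p demo || true   # expect "Paused" (exit 2)
  minikube status --format='{{.Kubelet}}' -p demo || true     # expect "Stopped" (exit 2)
  minikube unpause -p demo
  minikube status --format='{{.APIServer}}' -p demo           # expect "Running" again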

                                                
                                    

Test skip (40/326)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.2/cached-images 0
15 TestDownloadOnly/v1.30.2/binaries 0
16 TestDownloadOnly/v1.30.2/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
50 TestAddons/parallel/Volcano 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
273 TestNetworkPlugins/group/kubenet 3.34
281 TestNetworkPlugins/group/cilium 5.77
287 TestStartStop/group/disable-driver-mounts 0.14
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
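All of the TunnelCmd skips above share one cause: minikube tunnel needs to create a privileged host route, and the test user on this runner cannot run the route command without a password prompt. A minimal sketch of the kind of precondition check involved, assuming a Linux host (the exact probe the helper uses may differ):

  # Sketch only; verifies that privileged commands work non-interactively before tunneling.
  if sudo -n ip route show >/dev/null 2>&1; then
    minikube tunnel -p demo   # "demo" is a placeholder profile name
  fi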

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-369638 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-369638

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-369638

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-369638

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-369638

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-369638

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-369638

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-369638

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-369638

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-369638

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-369638

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-369638

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-369638" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-369638" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Jul 2024 19:15:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.196:8443
  name: running-upgrade-337952
contexts:
- context:
    cluster: running-upgrade-337952
    extensions:
    - extension:
        last-update: Wed, 17 Jul 2024 19:15:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: running-upgrade-337952
  name: running-upgrade-337952
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-337952
  user:
    client-certificate: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/running-upgrade-337952/client.crt
    client-key: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/running-upgrade-337952/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-369638

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-369638"

                                                
                                                
----------------------- debugLogs end: kubenet-369638 [took: 3.166764051s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-369638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-369638
--- SKIP: TestNetworkPlugins/group/kubenet (3.34s)
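A note on the kubenet debugLogs above: every kubectl call reports "context was not found" because the test is skipped before a kubenet-369638 cluster (and therefore a kubeconfig context) is ever created; the only context present at that moment belongs to the unrelated running-upgrade-337952 profile, and no current-context is set. A minimal sketch of confirming which contexts actually exist, using standard kubectl commands:

  # Sketch only; lists the contexts the debug helper could have targeted.
  kubectl config get-contexts
  kubectl config current-context || echo "no current context set"

The cilium debugLogs below fail for the same reason.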

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-369638 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-369638

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-369638

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-369638

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-369638

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-369638

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-369638

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-369638

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-369638

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-369638

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-369638

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-369638

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-369638" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-369638

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-369638

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-369638

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-369638

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-369638" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-369638" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19282-392903/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Jul 2024 19:15:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.196:8443
  name: running-upgrade-337952
contexts:
- context:
    cluster: running-upgrade-337952
    extensions:
    - extension:
        last-update: Wed, 17 Jul 2024 19:15:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: running-upgrade-337952
  name: running-upgrade-337952
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-337952
  user:
    client-certificate: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/running-upgrade-337952/client.crt
    client-key: /home/jenkins/minikube-integration/19282-392903/.minikube/profiles/running-upgrade-337952/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-369638

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-369638" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-369638"

                                                
                                                
----------------------- debugLogs end: cilium-369638 [took: 5.622337056s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-369638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-369638
--- SKIP: TestNetworkPlugins/group/cilium (5.77s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-728347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-728347
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
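For reference, a minimal sketch of how a Go test can gate itself on the active driver, matching the "only runs on virtualbox" skip recorded above. This is a hypothetical illustration, not the actual start_stop_delete_test.go code: the helper name and the MINIKUBE_TEST_DRIVER environment variable are assumptions made for the sketch.

package startstop

import (
	"os"
	"strings"
	"testing"
)

// maybeSkipDriverMounts skips the calling test unless the active driver is
// virtualbox, mirroring the "only runs on virtualbox" skip in the log above.
// NOTE: the helper name and the MINIKUBE_TEST_DRIVER variable are assumptions
// for this sketch, not part of the real minikube test suite.
func maybeSkipDriverMounts(t *testing.T) {
	t.Helper()
	if strings.ToLower(os.Getenv("MINIKUBE_TEST_DRIVER")) != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
}

func TestDisableDriverMountsSketch(t *testing.T) {
	maybeSkipDriverMounts(t)
	// The real test would start a cluster with --disable-driver-mounts here.
}

In the run above the driver check fails immediately, so only the profile cleanup from helpers_test.go appears in the log before the test is marked SKIP.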